Test Report: Docker_macOS 15331

                    
98ec3a9f03bfabd2eb54315516aa85163777fa99:2022-11-09:26489

Tests failed (16/295)

TestIngressAddonLegacy/StartLegacyK8sCluster (254.28s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-darwin-amd64 start -p ingress-addon-legacy-101309 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker 
E1109 10:13:40.386478   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/addons-100328/client.crt: no such file or directory
E1109 10:15:56.528900   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/addons-100328/client.crt: no such file or directory
E1109 10:16:24.226889   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/addons-100328/client.crt: no such file or directory
E1109 10:16:45.274998   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/functional-100827/client.crt: no such file or directory
E1109 10:16:45.280497   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/functional-100827/client.crt: no such file or directory
E1109 10:16:45.290758   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/functional-100827/client.crt: no such file or directory
E1109 10:16:45.311473   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/functional-100827/client.crt: no such file or directory
E1109 10:16:45.351928   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/functional-100827/client.crt: no such file or directory
E1109 10:16:45.434212   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/functional-100827/client.crt: no such file or directory
E1109 10:16:45.596415   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/functional-100827/client.crt: no such file or directory
E1109 10:16:45.918631   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/functional-100827/client.crt: no such file or directory
E1109 10:16:46.560938   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/functional-100827/client.crt: no such file or directory
E1109 10:16:47.843248   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/functional-100827/client.crt: no such file or directory
E1109 10:16:50.405524   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/functional-100827/client.crt: no such file or directory
E1109 10:16:55.526160   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/functional-100827/client.crt: no such file or directory
E1109 10:17:05.768455   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/functional-100827/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ingress-addon-legacy-101309 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker : exit status 109 (4m14.24789552s)
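
For local triage, the failing invocation can be replayed outside CI. A minimal sketch, assuming a locally built out/minikube-darwin-amd64 and a running Docker Desktop (profile name and flags are copied verbatim from the log above; the logs/delete follow-ups are standard minikube subcommands suggested here, not part of the test):

    # Replay the failing start with the same flags the test used
    out/minikube-darwin-amd64 start -p ingress-addon-legacy-101309 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker

    # On failure, capture cluster logs before tearing the profile down
    out/minikube-darwin-amd64 logs -p ingress-addon-legacy-101309
    out/minikube-darwin-amd64 delete -p ingress-addon-legacy-101309

The cert_rotation errors above point at client certs of other test profiles (addons-100328, functional-100827); they appear to be stale kubeconfig noise rather than the cause of this failure.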

-- stdout --
	* [ingress-addon-legacy-101309] minikube v1.28.0 on Darwin 13.0
	  - MINIKUBE_LOCATION=15331
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15331-22028/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15331-22028/.minikube
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node ingress-addon-legacy-101309 in cluster ingress-addon-legacy-101309
	* Pulling base image ...
	* Downloading Kubernetes v1.18.20 preload ...
	* Creating docker container (CPUs=2, Memory=4096MB) ...
	* Preparing Kubernetes v1.18.20 on Docker 20.10.20 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
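
Note that the stdout above prints the "Generating certificates and keys ..." / "Booting up control plane ..." pair twice, which suggests kubeadm init was attempted a second time before the start ultimately failed with exit status 109. If the node container is still running after such a failure, its control-plane state can be inspected directly; a sketch, assuming the container name from the log and the docker runtime inside the kic node:

    # List kube-apiserver containers inside the minikube node container
    docker exec ingress-addon-legacy-101309 docker ps -a --filter name=kube-apiserver

    # Tail the kubelet journal for bootstrap errors
    docker exec ingress-addon-legacy-101309 journalctl -u kubelet --no-pager -n 50
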
** stderr ** 
	I1109 10:13:09.101222   25528 out.go:296] Setting OutFile to fd 1 ...
	I1109 10:13:09.101414   25528 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1109 10:13:09.101419   25528 out.go:309] Setting ErrFile to fd 2...
	I1109 10:13:09.101428   25528 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1109 10:13:09.101544   25528 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15331-22028/.minikube/bin
	I1109 10:13:09.102094   25528 out.go:303] Setting JSON to false
	I1109 10:13:09.120887   25528 start.go:116] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":11564,"bootTime":1668006025,"procs":386,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.0","kernelVersion":"22.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W1109 10:13:09.120980   25528 start.go:124] gopshost.Virtualization returned error: not implemented yet
	I1109 10:13:09.142512   25528 out.go:177] * [ingress-addon-legacy-101309] minikube v1.28.0 on Darwin 13.0
	I1109 10:13:09.163984   25528 notify.go:220] Checking for updates...
	I1109 10:13:09.185313   25528 out.go:177]   - MINIKUBE_LOCATION=15331
	I1109 10:13:09.207023   25528 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15331-22028/kubeconfig
	I1109 10:13:09.228454   25528 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1109 10:13:09.250406   25528 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 10:13:09.272338   25528 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15331-22028/.minikube
	I1109 10:13:09.293622   25528 driver.go:365] Setting default libvirt URI to qemu:///system
	I1109 10:13:09.353877   25528 docker.go:137] docker version: linux-20.10.20
	I1109 10:13:09.354025   25528 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 10:13:09.494583   25528 info.go:266] docker info: {ID:DDH7:AERQ:YR2O:D5QA:GJP7:5WGB:4FTA:645H:2EO7:4YBT:LF5P:55H2 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:false NGoroutines:47 SystemTime:2022-11-09 18:13:09.418916855 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231719936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.1] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1109 10:13:09.537993   25528 out.go:177] * Using the docker driver based on user configuration
	I1109 10:13:09.558792   25528 start.go:282] selected driver: docker
	I1109 10:13:09.558810   25528 start.go:808] validating driver "docker" against <nil>
	I1109 10:13:09.558829   25528 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 10:13:09.561393   25528 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 10:13:09.701153   25528 info.go:266] docker info: {ID:DDH7:AERQ:YR2O:D5QA:GJP7:5WGB:4FTA:645H:2EO7:4YBT:LF5P:55H2 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:false NGoroutines:47 SystemTime:2022-11-09 18:13:09.627163036 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231719936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.1] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1109 10:13:09.701269   25528 start_flags.go:303] no existing cluster config was found, will generate one from the flags 
	I1109 10:13:09.701414   25528 start_flags.go:901] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 10:13:09.722954   25528 out.go:177] * Using Docker Desktop driver with root privileges
	I1109 10:13:09.743622   25528 cni.go:95] Creating CNI manager for ""
	I1109 10:13:09.743642   25528 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1109 10:13:09.743659   25528 start_flags.go:317] config:
	{Name:ingress-addon-legacy-101309 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-101309 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1109 10:13:09.764954   25528 out.go:177] * Starting control plane node ingress-addon-legacy-101309 in cluster ingress-addon-legacy-101309
	I1109 10:13:09.806917   25528 cache.go:120] Beginning downloading kic base image for docker with docker
	I1109 10:13:09.828664   25528 out.go:177] * Pulling base image ...
	I1109 10:13:09.870782   25528 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1109 10:13:09.870862   25528 image.go:76] Checking for gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon
	I1109 10:13:09.925665   25528 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I1109 10:13:09.925688   25528 cache.go:57] Caching tarball of preloaded images
	I1109 10:13:09.925910   25528 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1109 10:13:09.968628   25528 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I1109 10:13:09.979461   25528 image.go:80] Found gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon, skipping pull
	I1109 10:13:09.989858   25528 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 exists in daemon, skipping load
	I1109 10:13:09.989876   25528 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I1109 10:13:10.072888   25528 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4?checksum=md5:ff35f06d4f6c0bac9297b8f85d8ebf70 -> /Users/jenkins/minikube-integration/15331-22028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4
	I1109 10:13:14.710804   25528 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I1109 10:13:14.711005   25528 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/15331-22028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4 ...
	I1109 10:13:15.319140   25528 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on docker
	I1109 10:13:15.319427   25528 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/ingress-addon-legacy-101309/config.json ...
	I1109 10:13:15.319456   25528 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/ingress-addon-legacy-101309/config.json: {Name:mkc6c9654378d90b31df64c0b57677f0797202a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 10:13:15.319774   25528 cache.go:208] Successfully downloaded all kic artifacts
	I1109 10:13:15.319800   25528 start.go:364] acquiring machines lock for ingress-addon-legacy-101309: {Name:mk793ac2e4d48107a3d3957703e95cafe0d3757c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 10:13:15.319955   25528 start.go:368] acquired machines lock for "ingress-addon-legacy-101309" in 148.788µs
	I1109 10:13:15.320010   25528 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-101309 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-101309 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1109 10:13:15.320136   25528 start.go:125] createHost starting for "" (driver="docker")
	I1109 10:13:15.363919   25528 out.go:204] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1109 10:13:15.364241   25528 start.go:159] libmachine.API.Create for "ingress-addon-legacy-101309" (driver="docker")
	I1109 10:13:15.364284   25528 client.go:168] LocalClient.Create starting
	I1109 10:13:15.364491   25528 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca.pem
	I1109 10:13:15.364575   25528 main.go:134] libmachine: Decoding PEM data...
	I1109 10:13:15.364606   25528 main.go:134] libmachine: Parsing certificate...
	I1109 10:13:15.364715   25528 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/cert.pem
	I1109 10:13:15.364782   25528 main.go:134] libmachine: Decoding PEM data...
	I1109 10:13:15.364805   25528 main.go:134] libmachine: Parsing certificate...
	I1109 10:13:15.365771   25528 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-101309 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1109 10:13:15.422737   25528 cli_runner.go:211] docker network inspect ingress-addon-legacy-101309 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1109 10:13:15.422869   25528 network_create.go:272] running [docker network inspect ingress-addon-legacy-101309] to gather additional debugging logs...
	I1109 10:13:15.422894   25528 cli_runner.go:164] Run: docker network inspect ingress-addon-legacy-101309
	W1109 10:13:15.477007   25528 cli_runner.go:211] docker network inspect ingress-addon-legacy-101309 returned with exit code 1
	I1109 10:13:15.477034   25528 network_create.go:275] error running [docker network inspect ingress-addon-legacy-101309]: docker network inspect ingress-addon-legacy-101309: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: ingress-addon-legacy-101309
	I1109 10:13:15.477058   25528 network_create.go:277] output of [docker network inspect ingress-addon-legacy-101309]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: ingress-addon-legacy-101309
	
	** /stderr **
	I1109 10:13:15.477188   25528 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 10:13:15.531631   25528 network.go:295] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000490118] misses:0}
	I1109 10:13:15.531675   25528 network.go:241] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1109 10:13:15.531692   25528 network_create.go:115] attempt to create docker network ingress-addon-legacy-101309 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1109 10:13:15.531806   25528 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ingress-addon-legacy-101309 ingress-addon-legacy-101309
	I1109 10:13:15.664612   25528 network_create.go:99] docker network ingress-addon-legacy-101309 192.168.49.0/24 created
	I1109 10:13:15.664649   25528 kic.go:106] calculated static IP "192.168.49.2" for the "ingress-addon-legacy-101309" container
	I1109 10:13:15.664784   25528 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1109 10:13:15.719673   25528 cli_runner.go:164] Run: docker volume create ingress-addon-legacy-101309 --label name.minikube.sigs.k8s.io=ingress-addon-legacy-101309 --label created_by.minikube.sigs.k8s.io=true
	I1109 10:13:15.775635   25528 oci.go:103] Successfully created a docker volume ingress-addon-legacy-101309
	I1109 10:13:15.775775   25528 cli_runner.go:164] Run: docker run --rm --name ingress-addon-legacy-101309-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-101309 --entrypoint /usr/bin/test -v ingress-addon-legacy-101309:/var gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -d /var/lib
	I1109 10:13:16.226904   25528 oci.go:107] Successfully prepared a docker volume ingress-addon-legacy-101309
	I1109 10:13:16.226962   25528 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1109 10:13:16.226977   25528 kic.go:179] Starting extracting preloaded images to volume ...
	I1109 10:13:16.227099   25528 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15331-22028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-101309:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -I lz4 -xf /preloaded.tar -C /extractDir
	I1109 10:13:20.696590   25528 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15331-22028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ingress-addon-legacy-101309:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -I lz4 -xf /preloaded.tar -C /extractDir: (4.469404475s)
	I1109 10:13:20.696615   25528 kic.go:188] duration metric: took 4.469636 seconds to extract preloaded images to volume
	I1109 10:13:20.696761   25528 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1109 10:13:20.838465   25528 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ingress-addon-legacy-101309 --name ingress-addon-legacy-101309 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ingress-addon-legacy-101309 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ingress-addon-legacy-101309 --network ingress-addon-legacy-101309 --ip 192.168.49.2 --volume ingress-addon-legacy-101309:/var --security-opt apparmor=unconfined --memory=4096mb --memory-swap=4096mb --cpus=2 -e container=docker --expose 8443 --publish=8443 --publish=22 --publish=2376 --publish=5000 --publish=32443 gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456
	I1109 10:13:21.184522   25528 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-101309 --format={{.State.Running}}
	I1109 10:13:21.242235   25528 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-101309 --format={{.State.Status}}
	I1109 10:13:21.302750   25528 cli_runner.go:164] Run: docker exec ingress-addon-legacy-101309 stat /var/lib/dpkg/alternatives/iptables
	I1109 10:13:21.407466   25528 oci.go:144] the created container "ingress-addon-legacy-101309" has a running status.
	I1109 10:13:21.407503   25528 kic.go:210] Creating ssh key for kic: /Users/jenkins/minikube-integration/15331-22028/.minikube/machines/ingress-addon-legacy-101309/id_rsa...
	I1109 10:13:21.461181   25528 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/machines/ingress-addon-legacy-101309/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1109 10:13:21.461262   25528 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/15331-22028/.minikube/machines/ingress-addon-legacy-101309/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1109 10:13:21.564180   25528 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-101309 --format={{.State.Status}}
	I1109 10:13:21.620204   25528 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1109 10:13:21.620223   25528 kic_runner.go:114] Args: [docker exec --privileged ingress-addon-legacy-101309 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1109 10:13:21.723780   25528 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-101309 --format={{.State.Status}}
	I1109 10:13:21.779486   25528 machine.go:88] provisioning docker machine ...
	I1109 10:13:21.779527   25528 ubuntu.go:169] provisioning hostname "ingress-addon-legacy-101309"
	I1109 10:13:21.779640   25528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-101309
	I1109 10:13:21.835979   25528 main.go:134] libmachine: Using SSH client type: native
	I1109 10:13:21.836179   25528 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6d00] 0x13e9e80 <nil>  [] 0s} 127.0.0.1 61702 <nil> <nil>}
	I1109 10:13:21.836196   25528 main.go:134] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-101309 && echo "ingress-addon-legacy-101309" | sudo tee /etc/hostname
	I1109 10:13:21.961851   25528 main.go:134] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-101309
	
	I1109 10:13:21.961962   25528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-101309
	I1109 10:13:22.019385   25528 main.go:134] libmachine: Using SSH client type: native
	I1109 10:13:22.019543   25528 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6d00] 0x13e9e80 <nil>  [] 0s} 127.0.0.1 61702 <nil> <nil>}
	I1109 10:13:22.019561   25528 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-101309' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-101309/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-101309' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 10:13:22.137077   25528 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I1109 10:13:22.137103   25528 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15331-22028/.minikube CaCertPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15331-22028/.minikube}
	I1109 10:13:22.137125   25528 ubuntu.go:177] setting up certificates
	I1109 10:13:22.137133   25528 provision.go:83] configureAuth start
	I1109 10:13:22.137225   25528 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-101309
	I1109 10:13:22.192674   25528 provision.go:138] copyHostCerts
	I1109 10:13:22.192720   25528 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.pem
	I1109 10:13:22.192779   25528 exec_runner.go:144] found /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.pem, removing ...
	I1109 10:13:22.192787   25528 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.pem
	I1109 10:13:22.192895   25528 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.pem (1082 bytes)
	I1109 10:13:22.193073   25528 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/15331-22028/.minikube/cert.pem
	I1109 10:13:22.193110   25528 exec_runner.go:144] found /Users/jenkins/minikube-integration/15331-22028/.minikube/cert.pem, removing ...
	I1109 10:13:22.193115   25528 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15331-22028/.minikube/cert.pem
	I1109 10:13:22.193182   25528 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15331-22028/.minikube/cert.pem (1123 bytes)
	I1109 10:13:22.193327   25528 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/15331-22028/.minikube/key.pem
	I1109 10:13:22.193374   25528 exec_runner.go:144] found /Users/jenkins/minikube-integration/15331-22028/.minikube/key.pem, removing ...
	I1109 10:13:22.193379   25528 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15331-22028/.minikube/key.pem
	I1109 10:13:22.193443   25528 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15331-22028/.minikube/key.pem (1675 bytes)
	I1109 10:13:22.193568   25528 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-101309 san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube ingress-addon-legacy-101309]
	I1109 10:13:22.286686   25528 provision.go:172] copyRemoteCerts
	I1109 10:13:22.286750   25528 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 10:13:22.286825   25528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-101309
	I1109 10:13:22.341999   25528 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61702 SSHKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/ingress-addon-legacy-101309/id_rsa Username:docker}
	I1109 10:13:22.427060   25528 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1109 10:13:22.427142   25528 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1109 10:13:22.443455   25528 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1109 10:13:22.443551   25528 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I1109 10:13:22.459986   25528 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1109 10:13:22.460077   25528 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1109 10:13:22.476982   25528 provision.go:86] duration metric: configureAuth took 339.837004ms
	I1109 10:13:22.476995   25528 ubuntu.go:193] setting minikube options for container-runtime
	I1109 10:13:22.477154   25528 config.go:180] Loaded profile config "ingress-addon-legacy-101309": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1109 10:13:22.477232   25528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-101309
	I1109 10:13:22.532860   25528 main.go:134] libmachine: Using SSH client type: native
	I1109 10:13:22.533019   25528 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6d00] 0x13e9e80 <nil>  [] 0s} 127.0.0.1 61702 <nil> <nil>}
	I1109 10:13:22.533031   25528 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1109 10:13:22.651686   25528 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1109 10:13:22.651707   25528 ubuntu.go:71] root file system type: overlay
	I1109 10:13:22.651863   25528 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1109 10:13:22.651976   25528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-101309
	I1109 10:13:22.707804   25528 main.go:134] libmachine: Using SSH client type: native
	I1109 10:13:22.707965   25528 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6d00] 0x13e9e80 <nil>  [] 0s} 127.0.0.1 61702 <nil> <nil>}
	I1109 10:13:22.708018   25528 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1109 10:13:22.834098   25528 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1109 10:13:22.834205   25528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-101309
	I1109 10:13:22.889472   25528 main.go:134] libmachine: Using SSH client type: native
	I1109 10:13:22.889629   25528 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6d00] 0x13e9e80 <nil>  [] 0s} 127.0.0.1 61702 <nil> <nil>}
	I1109 10:13:22.889644   25528 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1109 10:13:23.481016   25528 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-10-18 18:18:12.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-11-09 18:13:22.836077322 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1109 10:13:23.481036   25528 machine.go:91] provisioned docker machine in 1.701530304s
	I1109 10:13:23.481044   25528 client.go:171] LocalClient.Create took 8.116748348s
	I1109 10:13:23.481062   25528 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-101309" took 8.116822621s
	I1109 10:13:23.481075   25528 start.go:300] post-start starting for "ingress-addon-legacy-101309" (driver="docker")
	I1109 10:13:23.481081   25528 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 10:13:23.481164   25528 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 10:13:23.481226   25528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-101309
	I1109 10:13:23.537768   25528 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61702 SSHKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/ingress-addon-legacy-101309/id_rsa Username:docker}
	I1109 10:13:23.624306   25528 ssh_runner.go:195] Run: cat /etc/os-release
	I1109 10:13:23.628078   25528 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1109 10:13:23.628094   25528 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1109 10:13:23.628101   25528 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1109 10:13:23.628111   25528 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I1109 10:13:23.628122   25528 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15331-22028/.minikube/addons for local assets ...
	I1109 10:13:23.628224   25528 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15331-22028/.minikube/files for local assets ...
	I1109 10:13:23.628407   25528 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15331-22028/.minikube/files/etc/ssl/certs/228682.pem -> 228682.pem in /etc/ssl/certs
	I1109 10:13:23.628413   25528 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/files/etc/ssl/certs/228682.pem -> /etc/ssl/certs/228682.pem
	I1109 10:13:23.628627   25528 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1109 10:13:23.635481   25528 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/files/etc/ssl/certs/228682.pem --> /etc/ssl/certs/228682.pem (1708 bytes)
	I1109 10:13:23.651662   25528 start.go:303] post-start completed in 170.577695ms
	I1109 10:13:23.652241   25528 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-101309
	I1109 10:13:23.710251   25528 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/ingress-addon-legacy-101309/config.json ...
	I1109 10:13:23.710687   25528 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 10:13:23.710751   25528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-101309
	I1109 10:13:23.767412   25528 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61702 SSHKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/ingress-addon-legacy-101309/id_rsa Username:docker}
	I1109 10:13:23.857158   25528 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1109 10:13:23.861566   25528 start.go:128] duration metric: createHost completed in 8.541420163s
	I1109 10:13:23.861583   25528 start.go:83] releasing machines lock for "ingress-addon-legacy-101309", held for 8.541615666s
	I1109 10:13:23.861691   25528 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ingress-addon-legacy-101309
	I1109 10:13:23.918453   25528 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I1109 10:13:23.918456   25528 ssh_runner.go:195] Run: systemctl --version
	I1109 10:13:23.918544   25528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-101309
	I1109 10:13:23.918551   25528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-101309
	I1109 10:13:23.977493   25528 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61702 SSHKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/ingress-addon-legacy-101309/id_rsa Username:docker}
	I1109 10:13:23.979074   25528 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61702 SSHKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/ingress-addon-legacy-101309/id_rsa Username:docker}
	I1109 10:13:24.316598   25528 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1109 10:13:24.326851   25528 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I1109 10:13:24.326917   25528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1109 10:13:24.335827   25528 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 10:13:24.348484   25528 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1109 10:13:24.414340   25528 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1109 10:13:24.480415   25528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 10:13:24.544139   25528 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1109 10:13:24.743380   25528 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1109 10:13:24.772198   25528 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1109 10:13:24.821765   25528 out.go:204] * Preparing Kubernetes v1.18.20 on Docker 20.10.20 ...
	I1109 10:13:24.821974   25528 cli_runner.go:164] Run: docker exec -t ingress-addon-legacy-101309 dig +short host.docker.internal
	I1109 10:13:24.932535   25528 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I1109 10:13:24.932639   25528 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I1109 10:13:24.937109   25528 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 10:13:24.947231   25528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ingress-addon-legacy-101309
	I1109 10:13:25.005746   25528 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime docker
	I1109 10:13:25.005843   25528 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1109 10:13:25.029107   25528 docker.go:613] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I1109 10:13:25.029124   25528 docker.go:543] Images already preloaded, skipping extraction
	I1109 10:13:25.029230   25528 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1109 10:13:25.051926   25528 docker.go:613] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.18.20
	k8s.gcr.io/kube-apiserver:v1.18.20
	k8s.gcr.io/kube-controller-manager:v1.18.20
	k8s.gcr.io/kube-scheduler:v1.18.20
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/pause:3.2
	k8s.gcr.io/coredns:1.6.7
	k8s.gcr.io/etcd:3.4.3-0
	
	-- /stdout --
	I1109 10:13:25.051949   25528 cache_images.go:84] Images are preloaded, skipping loading
	I1109 10:13:25.052038   25528 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1109 10:13:25.117557   25528 cni.go:95] Creating CNI manager for ""
	I1109 10:13:25.117571   25528 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1109 10:13:25.117591   25528 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1109 10:13:25.117611   25528 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-101309 NodeName:ingress-addon-legacy-101309 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
	I1109 10:13:25.117745   25528 kubeadm.go:161] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "ingress-addon-legacy-101309"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
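The generated KubeletConfiguration above pins cgroupDriver: systemd, which has to match the driver Docker itself reports (minikube queries it at 10:13:25.052 with docker info). A quick consistency check, as a sketch run inside the node:

    docker info --format '{{.CgroupDriver}}'             # expect: systemd
    sudo grep cgroupDriver /var/lib/kubelet/config.yaml  # expect: cgroupDriver: systemd
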
	I1109 10:13:25.117831   25528 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=ingress-addon-legacy-101309 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-101309 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
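
In the kubelet drop-in above, the empty ExecStart= line is the standard systemd override idiom: it clears the ExecStart inherited from the base kubelet.service so that only the minikube-supplied command line runs. Stripped to its shape (flags abbreviated here for illustration):

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --config=/var/lib/kubelet/config.yaml --kubeconfig=/etc/kubernetes/kubelet.conf
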
	I1109 10:13:25.117903   25528 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I1109 10:13:25.125265   25528 binaries.go:44] Found k8s binaries, skipping transfer
	I1109 10:13:25.125331   25528 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1109 10:13:25.132267   25528 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (354 bytes)
	I1109 10:13:25.144838   25528 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I1109 10:13:25.157428   25528 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2070 bytes)
	I1109 10:13:25.170204   25528 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1109 10:13:25.173777   25528 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 10:13:25.183744   25528 certs.go:54] Setting up /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/ingress-addon-legacy-101309 for IP: 192.168.49.2
	I1109 10:13:25.183887   25528 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.key
	I1109 10:13:25.183958   25528 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15331-22028/.minikube/proxy-client-ca.key
	I1109 10:13:25.184012   25528 certs.go:302] generating minikube-user signed cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/ingress-addon-legacy-101309/client.key
	I1109 10:13:25.184029   25528 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/ingress-addon-legacy-101309/client.crt with IP's: []
	I1109 10:13:25.422707   25528 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/ingress-addon-legacy-101309/client.crt ...
	I1109 10:13:25.422718   25528 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/ingress-addon-legacy-101309/client.crt: {Name:mkdef0d2eb2470e653103bc9d5f11ae902530f8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 10:13:25.423085   25528 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/ingress-addon-legacy-101309/client.key ...
	I1109 10:13:25.423093   25528 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/ingress-addon-legacy-101309/client.key: {Name:mka010c6bec794b172cc3a5cd8ba54b4a128659e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 10:13:25.423354   25528 certs.go:302] generating minikube signed cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/ingress-addon-legacy-101309/apiserver.key.dd3b5fb2
	I1109 10:13:25.423392   25528 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/ingress-addon-legacy-101309/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1109 10:13:25.744891   25528 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/ingress-addon-legacy-101309/apiserver.crt.dd3b5fb2 ...
	I1109 10:13:25.744905   25528 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/ingress-addon-legacy-101309/apiserver.crt.dd3b5fb2: {Name:mk2ba35356c78eeeb18d6c2a372b94de0951c370 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 10:13:25.745263   25528 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/ingress-addon-legacy-101309/apiserver.key.dd3b5fb2 ...
	I1109 10:13:25.745274   25528 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/ingress-addon-legacy-101309/apiserver.key.dd3b5fb2: {Name:mk7c4631a4ddf056c25e2d12b257eca71e02df48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 10:13:25.745509   25528 certs.go:320] copying /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/ingress-addon-legacy-101309/apiserver.crt.dd3b5fb2 -> /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/ingress-addon-legacy-101309/apiserver.crt
	I1109 10:13:25.745674   25528 certs.go:324] copying /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/ingress-addon-legacy-101309/apiserver.key.dd3b5fb2 -> /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/ingress-addon-legacy-101309/apiserver.key
	I1109 10:13:25.745913   25528 certs.go:302] generating aggregator signed cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/ingress-addon-legacy-101309/proxy-client.key
	I1109 10:13:25.745932   25528 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/ingress-addon-legacy-101309/proxy-client.crt with IP's: []
	I1109 10:13:25.789785   25528 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/ingress-addon-legacy-101309/proxy-client.crt ...
	I1109 10:13:25.789793   25528 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/ingress-addon-legacy-101309/proxy-client.crt: {Name:mke30734f39a6e47d99edf1510345a8bcda9e417 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 10:13:25.790068   25528 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/ingress-addon-legacy-101309/proxy-client.key ...
	I1109 10:13:25.790075   25528 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/ingress-addon-legacy-101309/proxy-client.key: {Name:mkca6ffd881ccb7fa57831a0459aa74b09f8932f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 10:13:25.790400   25528 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/ingress-addon-legacy-101309/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1109 10:13:25.790434   25528 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/ingress-addon-legacy-101309/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1109 10:13:25.790458   25528 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/ingress-addon-legacy-101309/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1109 10:13:25.790481   25528 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/ingress-addon-legacy-101309/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1109 10:13:25.790544   25528 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1109 10:13:25.790583   25528 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1109 10:13:25.790623   25528 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1109 10:13:25.790646   25528 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1109 10:13:25.790768   25528 certs.go:388] found cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/22868.pem (1338 bytes)
	W1109 10:13:25.790817   25528 certs.go:384] ignoring /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/22868_empty.pem, impossibly tiny 0 bytes
	I1109 10:13:25.790829   25528 certs.go:388] found cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca-key.pem (1675 bytes)
	I1109 10:13:25.790909   25528 certs.go:388] found cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca.pem (1082 bytes)
	I1109 10:13:25.790941   25528 certs.go:388] found cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/cert.pem (1123 bytes)
	I1109 10:13:25.790974   25528 certs.go:388] found cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/key.pem (1675 bytes)
	I1109 10:13:25.791086   25528 certs.go:388] found cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15331-22028/.minikube/files/etc/ssl/certs/228682.pem (1708 bytes)
	I1109 10:13:25.791125   25528 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/files/etc/ssl/certs/228682.pem -> /usr/share/ca-certificates/228682.pem
	I1109 10:13:25.791149   25528 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1109 10:13:25.791168   25528 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/22868.pem -> /usr/share/ca-certificates/22868.pem
	I1109 10:13:25.791690   25528 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/ingress-addon-legacy-101309/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1109 10:13:25.809822   25528 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/ingress-addon-legacy-101309/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1109 10:13:25.826659   25528 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/ingress-addon-legacy-101309/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1109 10:13:25.843445   25528 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/ingress-addon-legacy-101309/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1109 10:13:25.860054   25528 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 10:13:25.876628   25528 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1109 10:13:25.893647   25528 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 10:13:25.910194   25528 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1109 10:13:25.926942   25528 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/files/etc/ssl/certs/228682.pem --> /usr/share/ca-certificates/228682.pem (1708 bytes)
	I1109 10:13:25.943733   25528 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 10:13:25.960313   25528 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/22868.pem --> /usr/share/ca-certificates/22868.pem (1338 bytes)
	I1109 10:13:25.977347   25528 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1109 10:13:25.989812   25528 ssh_runner.go:195] Run: openssl version
	I1109 10:13:25.994949   25528 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 10:13:26.002671   25528 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 10:13:26.006604   25528 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Nov  9 18:04 /usr/share/ca-certificates/minikubeCA.pem
	I1109 10:13:26.006653   25528 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 10:13:26.011610   25528 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1109 10:13:26.019250   25528 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22868.pem && ln -fs /usr/share/ca-certificates/22868.pem /etc/ssl/certs/22868.pem"
	I1109 10:13:26.026857   25528 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22868.pem
	I1109 10:13:26.030625   25528 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Nov  9 18:08 /usr/share/ca-certificates/22868.pem
	I1109 10:13:26.030676   25528 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22868.pem
	I1109 10:13:26.035875   25528 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22868.pem /etc/ssl/certs/51391683.0"
	I1109 10:13:26.043410   25528 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/228682.pem && ln -fs /usr/share/ca-certificates/228682.pem /etc/ssl/certs/228682.pem"
	I1109 10:13:26.051480   25528 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/228682.pem
	I1109 10:13:26.055141   25528 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Nov  9 18:08 /usr/share/ca-certificates/228682.pem
	I1109 10:13:26.055198   25528 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/228682.pem
	I1109 10:13:26.060002   25528 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/228682.pem /etc/ssl/certs/3ec20f2e.0"
	I1109 10:13:26.067450   25528 kubeadm.go:396] StartCluster: {Name:ingress-addon-legacy-101309 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-101309 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1109 10:13:26.067556   25528 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1109 10:13:26.089010   25528 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1109 10:13:26.096445   25528 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1109 10:13:26.103223   25528 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I1109 10:13:26.103296   25528 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1109 10:13:26.110629   25528 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1109 10:13:26.110657   25528 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1109 10:13:26.156695   25528 kubeadm.go:317] [init] Using Kubernetes version: v1.18.20
	I1109 10:13:26.156828   25528 kubeadm.go:317] [preflight] Running pre-flight checks
	I1109 10:13:26.438955   25528 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1109 10:13:26.439043   25528 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1109 10:13:26.439126   25528 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1109 10:13:26.649727   25528 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1109 10:13:26.650564   25528 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1109 10:13:26.650647   25528 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I1109 10:13:26.720455   25528 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1109 10:13:26.763721   25528 out.go:204]   - Generating certificates and keys ...
	I1109 10:13:26.763823   25528 kubeadm.go:317] [certs] Using existing ca certificate authority
	I1109 10:13:26.763888   25528 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I1109 10:13:26.843268   25528 kubeadm.go:317] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1109 10:13:26.957333   25528 kubeadm.go:317] [certs] Generating "front-proxy-ca" certificate and key
	I1109 10:13:27.041389   25528 kubeadm.go:317] [certs] Generating "front-proxy-client" certificate and key
	I1109 10:13:27.214891   25528 kubeadm.go:317] [certs] Generating "etcd/ca" certificate and key
	I1109 10:13:27.335405   25528 kubeadm.go:317] [certs] Generating "etcd/server" certificate and key
	I1109 10:13:27.335520   25528 kubeadm.go:317] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-101309 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1109 10:13:27.434594   25528 kubeadm.go:317] [certs] Generating "etcd/peer" certificate and key
	I1109 10:13:27.434698   25528 kubeadm.go:317] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-101309 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1109 10:13:27.605531   25528 kubeadm.go:317] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1109 10:13:27.831674   25528 kubeadm.go:317] [certs] Generating "apiserver-etcd-client" certificate and key
	I1109 10:13:28.114407   25528 kubeadm.go:317] [certs] Generating "sa" key and public key
	I1109 10:13:28.114686   25528 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1109 10:13:28.375328   25528 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1109 10:13:28.881927   25528 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1109 10:13:29.051996   25528 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1109 10:13:29.170501   25528 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1109 10:13:29.171502   25528 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1109 10:13:29.192631   25528 out.go:204]   - Booting up control plane ...
	I1109 10:13:29.192839   25528 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1109 10:13:29.193025   25528 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1109 10:13:29.193159   25528 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1109 10:13:29.193348   25528 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1109 10:13:29.193649   25528 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1109 10:14:09.154278   25528 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
	I1109 10:14:09.155436   25528 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1109 10:14:09.155661   25528 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1109 10:14:14.153788   25528 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1109 10:14:14.153990   25528 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1109 10:14:24.148230   25528 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1109 10:14:24.148486   25528 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1109 10:14:44.135340   25528 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1109 10:14:44.135564   25528 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1109 10:15:24.108423   25528 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1109 10:15:24.108645   25528 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1109 10:15:24.108661   25528 kubeadm.go:317] 
	I1109 10:15:24.108699   25528 kubeadm.go:317] 	Unfortunately, an error has occurred:
	I1109 10:15:24.108753   25528 kubeadm.go:317] 		timed out waiting for the condition
	I1109 10:15:24.108769   25528 kubeadm.go:317] 
	I1109 10:15:24.108814   25528 kubeadm.go:317] 	This error is likely caused by:
	I1109 10:15:24.108849   25528 kubeadm.go:317] 		- The kubelet is not running
	I1109 10:15:24.108968   25528 kubeadm.go:317] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1109 10:15:24.108974   25528 kubeadm.go:317] 
	I1109 10:15:24.109082   25528 kubeadm.go:317] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1109 10:15:24.109117   25528 kubeadm.go:317] 		- 'systemctl status kubelet'
	I1109 10:15:24.109147   25528 kubeadm.go:317] 		- 'journalctl -xeu kubelet'
	I1109 10:15:24.109152   25528 kubeadm.go:317] 
	I1109 10:15:24.109348   25528 kubeadm.go:317] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1109 10:15:24.109533   25528 kubeadm.go:317] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1109 10:15:24.109573   25528 kubeadm.go:317] 
	I1109 10:15:24.109709   25528 kubeadm.go:317] 	Here is one example how you may list all Kubernetes containers running in docker:
	I1109 10:15:24.109785   25528 kubeadm.go:317] 		- 'docker ps -a | grep kube | grep -v pause'
	I1109 10:15:24.109895   25528 kubeadm.go:317] 		Once you have found the failing container, you can inspect its logs with:
	I1109 10:15:24.109934   25528 kubeadm.go:317] 		- 'docker logs CONTAINERID'
	I1109 10:15:24.109941   25528 kubeadm.go:317] 
	I1109 10:15:24.113197   25528 kubeadm.go:317] W1109 18:13:26.162147     958 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I1109 10:15:24.113268   25528 kubeadm.go:317] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I1109 10:15:24.113371   25528 kubeadm.go:317] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 19.03
	I1109 10:15:24.113476   25528 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1109 10:15:24.113595   25528 kubeadm.go:317] W1109 18:13:29.185196     958 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1109 10:15:24.113762   25528 kubeadm.go:317] W1109 18:13:29.186047     958 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1109 10:15:24.113819   25528 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1109 10:15:24.113880   25528 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
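
Every healthz probe above fails with connection refused, i.e. the kubelet never started listening on 10248; the preflight warnings also show the kubelet service was not enabled. A diagnostic sketch along the lines kubeadm suggests (assumes a shell on the node, e.g. via minikube ssh -p ingress-addon-legacy-101309):

    systemctl status kubelet                  # is the unit active at all?
    journalctl -xeu kubelet | tail -n 50      # the kubelet's own exit reason
    curl -sS http://localhost:10248/healthz   # the probe kubeadm keeps retrying
    docker ps -a | grep kube | grep -v pause  # were any control-plane containers created?
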
	W1109 10:15:24.114116   25528 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-101309 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-101309 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W1109 18:13:26.162147     958 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1109 18:13:29.185196     958 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W1109 18:13:29.186047     958 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1109 10:15:24.114147   25528 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I1109 10:15:24.529235   25528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 10:15:24.538622   25528 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I1109 10:15:24.538683   25528 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1109 10:15:24.545953   25528 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1109 10:15:24.545973   25528 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1109 10:15:24.592545   25528 kubeadm.go:317] [init] Using Kubernetes version: v1.18.20
	I1109 10:15:24.592605   25528 kubeadm.go:317] [preflight] Running pre-flight checks
	I1109 10:15:24.871613   25528 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1109 10:15:24.871710   25528 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1109 10:15:24.871789   25528 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1109 10:15:25.086220   25528 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1109 10:15:25.087441   25528 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1109 10:15:25.087506   25528 kubeadm.go:317] [kubelet-start] Starting the kubelet
	I1109 10:15:25.153322   25528 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1109 10:15:25.174791   25528 out.go:204]   - Generating certificates and keys ...
	I1109 10:15:25.174858   25528 kubeadm.go:317] [certs] Using existing ca certificate authority
	I1109 10:15:25.174940   25528 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I1109 10:15:25.175018   25528 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1109 10:15:25.175071   25528 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I1109 10:15:25.175145   25528 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I1109 10:15:25.175205   25528 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I1109 10:15:25.175265   25528 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I1109 10:15:25.175319   25528 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I1109 10:15:25.175382   25528 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1109 10:15:25.175432   25528 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1109 10:15:25.175472   25528 kubeadm.go:317] [certs] Using the existing "sa" key
	I1109 10:15:25.175535   25528 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1109 10:15:25.261196   25528 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1109 10:15:25.429331   25528 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1109 10:15:25.695228   25528 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1109 10:15:25.807505   25528 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1109 10:15:25.807998   25528 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1109 10:15:25.829717   25528 out.go:204]   - Booting up control plane ...
	I1109 10:15:25.829984   25528 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1109 10:15:25.830154   25528 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1109 10:15:25.830274   25528 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1109 10:15:25.830392   25528 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1109 10:15:25.830677   25528 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1109 10:16:05.790596   25528 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
	I1109 10:16:05.791561   25528 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1109 10:16:05.791774   25528 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1109 10:16:10.790081   25528 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1109 10:16:10.790339   25528 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1109 10:16:20.784934   25528 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1109 10:16:20.785154   25528 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1109 10:16:40.772411   25528 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1109 10:16:40.772638   25528 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1109 10:17:20.745476   25528 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1109 10:17:20.745686   25528 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1109 10:17:20.745701   25528 kubeadm.go:317] 
	I1109 10:17:20.745751   25528 kubeadm.go:317] 	Unfortunately, an error has occurred:
	I1109 10:17:20.745798   25528 kubeadm.go:317] 		timed out waiting for the condition
	I1109 10:17:20.745804   25528 kubeadm.go:317] 
	I1109 10:17:20.745841   25528 kubeadm.go:317] 	This error is likely caused by:
	I1109 10:17:20.745890   25528 kubeadm.go:317] 		- The kubelet is not running
	I1109 10:17:20.745998   25528 kubeadm.go:317] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1109 10:17:20.746006   25528 kubeadm.go:317] 
	I1109 10:17:20.746116   25528 kubeadm.go:317] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1109 10:17:20.746159   25528 kubeadm.go:317] 		- 'systemctl status kubelet'
	I1109 10:17:20.746191   25528 kubeadm.go:317] 		- 'journalctl -xeu kubelet'
	I1109 10:17:20.746198   25528 kubeadm.go:317] 
	I1109 10:17:20.746300   25528 kubeadm.go:317] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1109 10:17:20.746379   25528 kubeadm.go:317] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1109 10:17:20.746387   25528 kubeadm.go:317] 
	I1109 10:17:20.746498   25528 kubeadm.go:317] 	Here is one example how you may list all Kubernetes containers running in docker:
	I1109 10:17:20.746551   25528 kubeadm.go:317] 		- 'docker ps -a | grep kube | grep -v pause'
	I1109 10:17:20.746618   25528 kubeadm.go:317] 		Once you have found the failing container, you can inspect its logs with:
	I1109 10:17:20.746651   25528 kubeadm.go:317] 		- 'docker logs CONTAINERID'
	I1109 10:17:20.746657   25528 kubeadm.go:317] 
	I1109 10:17:20.748946   25528 kubeadm.go:317] W1109 18:15:24.596951    3444 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I1109 10:17:20.749028   25528 kubeadm.go:317] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I1109 10:17:20.749149   25528 kubeadm.go:317] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 19.03
	I1109 10:17:20.749225   25528 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1109 10:17:20.749345   25528 kubeadm.go:317] W1109 18:15:25.817773    3444 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1109 10:17:20.749432   25528 kubeadm.go:317] W1109 18:15:25.818787    3444 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I1109 10:17:20.749500   25528 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1109 10:17:20.749557   25528 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
	I1109 10:17:20.749590   25528 kubeadm.go:398] StartCluster complete in 3m54.682054126s
	I1109 10:17:20.749689   25528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1109 10:17:20.772328   25528 logs.go:274] 0 containers: []
	W1109 10:17:20.772341   25528 logs.go:276] No container was found matching "kube-apiserver"
	I1109 10:17:20.772427   25528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1109 10:17:20.794407   25528 logs.go:274] 0 containers: []
	W1109 10:17:20.794418   25528 logs.go:276] No container was found matching "etcd"
	I1109 10:17:20.794502   25528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1109 10:17:20.817326   25528 logs.go:274] 0 containers: []
	W1109 10:17:20.817337   25528 logs.go:276] No container was found matching "coredns"
	I1109 10:17:20.817421   25528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1109 10:17:20.844735   25528 logs.go:274] 0 containers: []
	W1109 10:17:20.844746   25528 logs.go:276] No container was found matching "kube-scheduler"
	I1109 10:17:20.844824   25528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1109 10:17:20.866441   25528 logs.go:274] 0 containers: []
	W1109 10:17:20.866453   25528 logs.go:276] No container was found matching "kube-proxy"
	I1109 10:17:20.866535   25528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1109 10:17:20.888238   25528 logs.go:274] 0 containers: []
	W1109 10:17:20.888249   25528 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1109 10:17:20.888334   25528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1109 10:17:20.909202   25528 logs.go:274] 0 containers: []
	W1109 10:17:20.909214   25528 logs.go:276] No container was found matching "storage-provisioner"
	I1109 10:17:20.909298   25528 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1109 10:17:20.930795   25528 logs.go:274] 0 containers: []
	W1109 10:17:20.930811   25528 logs.go:276] No container was found matching "kube-controller-manager"
	I1109 10:17:20.930819   25528 logs.go:123] Gathering logs for dmesg ...
	I1109 10:17:20.930826   25528 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1109 10:17:20.943516   25528 logs.go:123] Gathering logs for describe nodes ...
	I1109 10:17:20.943532   25528 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1109 10:17:20.996952   25528 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1109 10:17:20.996962   25528 logs.go:123] Gathering logs for Docker ...
	I1109 10:17:20.996969   25528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1109 10:17:21.012476   25528 logs.go:123] Gathering logs for container status ...
	I1109 10:17:21.012489   25528 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1109 10:17:23.062097   25528 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.049596062s)
	I1109 10:17:23.062271   25528 logs.go:123] Gathering logs for kubelet ...
	I1109 10:17:23.062281   25528 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1109 10:17:23.100608   25528 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtime's CLI.
	
		Here is one example of how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W1109 18:15:24.596951    3444 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1109 18:15:25.817773    3444 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W1109 18:15:25.818787    3444 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1109 10:17:23.100628   25528 out.go:239] * 
	W1109 10:17:23.100748   25528 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtime's CLI.
	
		Here is one example of how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W1109 18:15:24.596951    3444 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1109 18:15:25.817773    3444 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W1109 18:15:25.818787    3444 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1109 10:17:23.100766   25528 out.go:239] * 
	W1109 10:17:23.101406   25528 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1109 10:17:23.166357   25528 out.go:177] 
	W1109 10:17:23.209319   25528 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.18.20
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtime's CLI.
	
		Here is one example of how you may list all Kubernetes containers running in docker:
			- 'docker ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'docker logs CONTAINERID'
	
	
	stderr:
	W1109 18:15:24.596951    3444 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 19.03
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	W1109 18:15:25.817773    3444 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	W1109 18:15:25.818787    3444 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1109 10:17:23.209492   25528 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1109 10:17:23.209559   25528 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1109 10:17:23.231281   25528 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:41: failed to start minikube with args: "out/minikube-darwin-amd64 start -p ingress-addon-legacy-101309 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=docker " : exit status 109
--- FAIL: TestIngressAddonLegacy/StartLegacyK8sCluster (254.28s)
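
The kubeadm output above already names the checks worth running on this node; collected into one sequence, a minimal sketch (profile name taken from this log, commands limited to those suggested in the output above):

    # open a shell on the failing node
    minikube -p ingress-addon-legacy-101309 ssh

    # inside the node: check kubelet health, per the kubeadm advice
    systemctl status kubelet
    journalctl -xeu kubelet

    # list any control-plane containers and inspect the logs of a failing one
    docker ps -a | grep kube | grep -v pause
    docker logs CONTAINERID   # substitute an ID found by the previous command

If the kubelet logs point at a cgroup-driver mismatch, the suggestion printed above applies: retry with 'minikube start -p ingress-addon-legacy-101309 --extra-config=kubelet.cgroup-driver=systemd'.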

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (89.57s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-101309 addons enable ingress --alsologtostderr -v=5
E1109 10:17:26.249031   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/functional-100827/client.crt: no such file or directory
E1109 10:18:07.211379   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/functional-100827/client.crt: no such file or directory
ingress_addon_legacy_test.go:70: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-101309 addons enable ingress --alsologtostderr -v=5: exit status 10 (1m29.124233967s)
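
Every retry in the stderr below fails with the same "connection refused" on localhost:8443, i.e. the apiserver that the previous test failed to start never came up. A quick way to confirm that from the host, as a sketch using the profile name and container filter already shown in this log:

    # is there any apiserver container at all, and does the health endpoint answer?
    minikube -p ingress-addon-legacy-101309 ssh -- sudo docker ps -a --filter=name=k8s_kube-apiserver
    minikube -p ingress-addon-legacy-101309 ssh -- curl -sk https://localhost:8443/healthz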

                                                
                                                
-- stdout --
	* ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	  - Using image k8s.gcr.io/ingress-nginx/controller:v0.49.3
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	  - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	* Verifying ingress addon...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1109 10:17:23.381896   25859 out.go:296] Setting OutFile to fd 1 ...
	I1109 10:17:23.383034   25859 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1109 10:17:23.383043   25859 out.go:309] Setting ErrFile to fd 2...
	I1109 10:17:23.383047   25859 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1109 10:17:23.383165   25859 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15331-22028/.minikube/bin
	I1109 10:17:23.404403   25859 out.go:177] * ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I1109 10:17:23.425855   25859 config.go:180] Loaded profile config "ingress-addon-legacy-101309": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1109 10:17:23.425886   25859 addons.go:65] Setting ingress=true in profile "ingress-addon-legacy-101309"
	I1109 10:17:23.425900   25859 addons.go:227] Setting addon ingress=true in "ingress-addon-legacy-101309"
	I1109 10:17:23.426472   25859 host.go:66] Checking if "ingress-addon-legacy-101309" exists ...
	I1109 10:17:23.427467   25859 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-101309 --format={{.State.Status}}
	I1109 10:17:23.505286   25859 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I1109 10:17:23.526160   25859 out.go:177]   - Using image k8s.gcr.io/ingress-nginx/controller:v0.49.3
	I1109 10:17:23.547033   25859 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I1109 10:17:23.568207   25859 out.go:177]   - Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
	I1109 10:17:23.589527   25859 addons.go:419] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1109 10:17:23.589563   25859 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (15118 bytes)
	I1109 10:17:23.589734   25859 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-101309
	I1109 10:17:23.647370   25859 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61702 SSHKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/ingress-addon-legacy-101309/id_rsa Username:docker}
	I1109 10:17:23.736724   25859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1109 10:17:23.787260   25859 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 10:17:23.787282   25859 retry.go:31] will retry after 276.165072ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 10:17:24.063742   25859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1109 10:17:24.115368   25859 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 10:17:24.115385   25859 retry.go:31] will retry after 540.190908ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 10:17:24.657731   25859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1109 10:17:24.710558   25859 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 10:17:24.710572   25859 retry.go:31] will retry after 655.06503ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 10:17:25.366822   25859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1109 10:17:25.419328   25859 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 10:17:25.419349   25859 retry.go:31] will retry after 791.196345ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 10:17:26.212826   25859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1109 10:17:26.268525   25859 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 10:17:26.268541   25859 retry.go:31] will retry after 1.170244332s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 10:17:27.439626   25859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1109 10:17:27.491236   25859 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 10:17:27.491252   25859 retry.go:31] will retry after 2.253109428s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 10:17:29.746675   25859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1109 10:17:29.802098   25859 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 10:17:29.802112   25859 retry.go:31] will retry after 1.610739793s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 10:17:31.415157   25859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1109 10:17:31.468891   25859 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 10:17:31.468909   25859 retry.go:31] will retry after 2.804311738s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 10:17:34.275351   25859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1109 10:17:34.328241   25859 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 10:17:34.328258   25859 retry.go:31] will retry after 3.824918958s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 10:17:38.155538   25859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1109 10:17:38.210008   25859 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 10:17:38.210029   25859 retry.go:31] will retry after 7.69743562s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 10:17:45.909785   25859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1109 10:17:45.963874   25859 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 10:17:45.963889   25859 retry.go:31] will retry after 14.635568968s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 10:18:00.600581   25859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1109 10:18:00.654621   25859 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 10:18:00.654635   25859 retry.go:31] will retry after 28.406662371s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 10:18:29.061540   25859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1109 10:18:29.112938   25859 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 10:18:29.112954   25859 retry.go:31] will retry after 23.168280436s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 10:18:52.281448   25859 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	W1109 10:18:52.333575   25859 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 10:18:52.333603   25859 addons.go:457] Verifying addon ingress=true in "ingress-addon-legacy-101309"
	I1109 10:18:52.355462   25859 out.go:177] * Verifying ingress addon...
	I1109 10:18:52.378632   25859 out.go:177] 
	W1109 10:18:52.400254   25859 out.go:239] X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 get kube-client to validate ingress addon: client config: context "ingress-addon-legacy-101309" does not exist: client config: context "ingress-addon-legacy-101309" does not exist]
	W1109 10:18:52.400282   25859 out.go:239] * 
	W1109 10:18:52.406192   25859 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_addons_ecab7b1157b569c129811d3c2b680fbca2a6f3d2_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1109 10:18:52.427323   25859 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:71: failed to enable ingress addon: exit status 10
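
The addon enable spends its whole 89 s in the retry loop visible above: each 'kubectl apply' is retried after a roughly exponentially growing, jittered delay (276 ms, 540 ms, ... 28 s) until the budget runs out. A minimal shell sketch of that pattern; the doubling schedule and the 90 s budget are assumptions read off the intervals above, not minikube's retry.go verbatim:

    delay=0.3
    deadline=$((SECONDS + 90))
    until sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
          /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml; do
        # give up once the overall time budget is exhausted
        [ "$SECONDS" -ge "$deadline" ] && { echo "giving up"; exit 1; }
        sleep "$delay"
        # roughly double the delay between attempts (float math via awk)
        delay=$(awk -v d="$delay" 'BEGIN { print d * 2 }')
    done

Here every attempt fails identically, so the loop only ends when the deadline passes, which is why the test burns its full timeout before exiting with status 10.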
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-101309
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-101309:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4177326f428caa8a05c360bce74c34f3363ba07ce4e597917175749a49da7acc",
	        "Created": "2022-11-09T18:13:20.893548707Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 40232,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-11-09T18:13:21.179329027Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:866c1fe4e3f2d2bfd7e546c12f77c7ef1d94d65a891923ff6772712a9f20df40",
	        "ResolvConfPath": "/var/lib/docker/containers/4177326f428caa8a05c360bce74c34f3363ba07ce4e597917175749a49da7acc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4177326f428caa8a05c360bce74c34f3363ba07ce4e597917175749a49da7acc/hostname",
	        "HostsPath": "/var/lib/docker/containers/4177326f428caa8a05c360bce74c34f3363ba07ce4e597917175749a49da7acc/hosts",
	        "LogPath": "/var/lib/docker/containers/4177326f428caa8a05c360bce74c34f3363ba07ce4e597917175749a49da7acc/4177326f428caa8a05c360bce74c34f3363ba07ce4e597917175749a49da7acc-json.log",
	        "Name": "/ingress-addon-legacy-101309",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "ingress-addon-legacy-101309:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-101309",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/9b3b4b600bb8261bbf2b7702087e4c9887d6c516fd952c8ef0887a332d9917ec-init/diff:/var/lib/docker/overlay2/8c1487330bae95024fb04d0a8169f7cc81fd1ba3c27821870f7ac7c3f14eba21/diff:/var/lib/docker/overlay2/bcaf2c5b25be7a7acfb5b663242cc7456d579ea111b07e556bc197c7bfe8eceb/diff:/var/lib/docker/overlay2/0638d8210ce7d8ac0e4379a16e33ec4ba3dad0040bc7b1e6eee9a3ce3b1bab29/diff:/var/lib/docker/overlay2/82d04ede67e6bea7f3cfd2fd8cdf0af23333441d1a311f6c55109e45255a64ad/diff:/var/lib/docker/overlay2/00bbdacd39c41ffbc754eaba2d71640e0fb4097eb9097b8c2a5999bb5a8d4954/diff:/var/lib/docker/overlay2/dcea734b558e644021b8ede0f23c4e46a58e4c344becb334c465fd62b5d48e24/diff:/var/lib/docker/overlay2/ac3602d3dd4e947c3a4676ef8c632089eb73ee68aba964a7d95271ee18eb97f2/diff:/var/lib/docker/overlay2/ac2acc0194de08599857f1b8448ae7b4683ed77f947900bfd694cf26f6c54ffc/diff:/var/lib/docker/overlay2/fdbfaed38c23fa0bd5c54d311629017408fe01fee83151dd3f3d638a7617f4e4/diff:/var/lib/docker/overlay2/d025fd
583df9cfe294d4d46082700b7f5c621b93a796ba7f8f971ddaa60fd83a/diff:/var/lib/docker/overlay2/f4c2a2db4696fc9f1bd6e98e05d393517d2daaeb90f35ae457c61d742e4cc236/diff:/var/lib/docker/overlay2/5ca3c90c302636922d6701cd2547bba3ccd398ec5ade10e04dccd4fe6104a487/diff:/var/lib/docker/overlay2/a5a65589498adaf58375923e30a95f690962a85ecbf6af317b41821b327542b2/diff:/var/lib/docker/overlay2/ff71186ee131d2e64c9cb2be6b53d85bf84ea4a195c417de669d42fe5e10eecd/diff:/var/lib/docker/overlay2/493a221169b45236aaee4b88113fdb3c67c8fbb99e614b4a728d47a8448a3f3f/diff:/var/lib/docker/overlay2/4bafd70e2ae935045921b84746858ec62889e360ddf11495e2a15831b74efc0a/diff:/var/lib/docker/overlay2/90fd6faa0cf3969fb696847bf51d309918860f0cc4599a708e4932647f26c73e/diff:/var/lib/docker/overlay2/ea92881c6586b95c867a9734394d9d100f56f7cbe0812c11395e47b6035c4508/diff:/var/lib/docker/overlay2/ecab8d41ffba5fecbe6e01377fa7b74a9a81ceea0b6ce37ad2373c1bbf89f44a/diff:/var/lib/docker/overlay2/0a01bb2689fa7bca8ea3322bf7e0b9d33392f902c096d5e452da6755180c4a06/diff:/var/lib/d
ocker/overlay2/ab470b7aab8ddccf634d27d72ad09bcf355c2bd4439dcdf67f345220671e4238/diff:/var/lib/docker/overlay2/e7aae4cf5fe266e78947648cb680b6e10a1e6f6527df18d86605a770111ddaa5/diff:/var/lib/docker/overlay2/6dd4c667173ad3322ca465531a62d549cfe66fbb40165818a4e3923e37895eee/diff:/var/lib/docker/overlay2/6053a29c5dc20476b02a6b6d0dafc1d7a81702c6680392177192d709341eabd0/diff:/var/lib/docker/overlay2/80d8ec07feaf3a90ae374a6503523b083045c37de15abf3c2f12d0a21bea84c4/diff:/var/lib/docker/overlay2/55ad8679d9710c334bac8daf6e3b0f9a8fcafc62f44b8f2612bb054ff91aac64/diff:/var/lib/docker/overlay2/64743b589f654fa1e35b0e7be5ff94a3bebfa17c8f1c9811e0d42cdade3f57e7/diff:/var/lib/docker/overlay2/3722e4a69202d28b84adf462e6aa9f065e8079b1c00f372b68d56c9b2c44e658/diff:/var/lib/docker/overlay2/d1ceccb867521773a63007a600d64b8537e1cb227e2d9a6f9df5525e8315b3ef/diff:/var/lib/docker/overlay2/5de0b7762a7bcd971dba6ab8b5ec3a1163b2eb72c904b17e6b0b10dac2ed8cc6/diff:/var/lib/docker/overlay2/36f2255b89964a0e12e3175634bd5c1dfabf520e5a894e260323e26c3a3
83e8c/diff:/var/lib/docker/overlay2/58ca82e7923ce16120ce2bdcabd5d071ca9618a7139cac111d5d271fcb44d6b6/diff:/var/lib/docker/overlay2/c6b28d136c7e3834c9977a2115a7c798e71334d33a76997b156f96642e187677/diff:/var/lib/docker/overlay2/8a75a817735ea5c25b9b75502ba91bba33b5160dab28a17f2f44fa68bd8dcc3f/diff:/var/lib/docker/overlay2/4513fa1cc1e8023f3c0a924e36218c37dfe3595aec46e4d2d96d6c165774b8a3/diff:/var/lib/docker/overlay2/3d3be6ad927b487673f3c43210c9ea9a1acfa4d46cbcb724fce27baf9158b507/diff:/var/lib/docker/overlay2/b8e22ec10062469f680485d2f5f73afce0218c32b25e56188c00547a8152d0c7/diff:/var/lib/docker/overlay2/cb1cb5efbfa387d8fc791f28bdad103d39664ae58a6e372eddc5588db5779427/diff:/var/lib/docker/overlay2/c796de90ee7673fa4d316d056c320ee04f0b6ba574aaa33e4073e3a7200c11a6/diff:/var/lib/docker/overlay2/73c2de759693b5ffd934f7354e3db91ba89c6a5a9c24621fd7c27411bc335c5a/diff:/var/lib/docker/overlay2/46e9fe39b8edeecbe0b31037d24c2994ac3848fbb3af5ed3c47ca2fc1ad0d301/diff:/var/lib/docker/overlay2/febe0fa15a70685bf242a86e91427efdb9b7ec
302a48a7004f89cc569145c7a1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9b3b4b600bb8261bbf2b7702087e4c9887d6c516fd952c8ef0887a332d9917ec/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9b3b4b600bb8261bbf2b7702087e4c9887d6c516fd952c8ef0887a332d9917ec/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9b3b4b600bb8261bbf2b7702087e4c9887d6c516fd952c8ef0887a332d9917ec/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-101309",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-101309/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-101309",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-101309",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-101309",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6ba363317b9ef0055dc7409d218fea28c156d256b12a1788291ed0eaae665bb9",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "61702"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "61703"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "61699"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "61700"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "61701"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/6ba363317b9e",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-101309": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "4177326f428c",
	                        "ingress-addon-legacy-101309"
	                    ],
	                    "NetworkID": "6f252337b29e2f0cbd9cf8a51a00540e04f9eb83e0a41a38678fafa5eb5b7ae1",
	                    "EndpointID": "c2990976f9a3da549534fa73dee0d50e5f28bd9f4fee464fd17899c7127f09a1",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-101309 -n ingress-addon-legacy-101309
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-101309 -n ingress-addon-legacy-101309: exit status 6 (390.427669ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1109 10:18:52.890984   25949 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-101309" does not appear in /Users/jenkins/minikube-integration/15331-22028/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-101309" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (89.57s)
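The two errors above share one root cause: the profile entry has been removed from /Users/jenkins/minikube-integration/15331-22028/kubeconfig, so the status probe (and kubectl) resolve a stale endpoint. The warning in the stdout block already names the recovery step; a minimal sketch of it, using the profile name from this run and assuming, as minikube normally does, that the kubectl context is named after the profile:

	# Rewrite the kubeconfig entry for this profile to its current endpoint
	out/minikube-darwin-amd64 update-context -p ingress-addon-legacy-101309
	# Confirm the context resolves again
	kubectl --context ingress-addon-legacy-101309 get nodes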

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (89.54s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-darwin-amd64 -p ingress-addon-legacy-101309 addons enable ingress-dns --alsologtostderr -v=5
E1109 10:19:29.133792   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/functional-100827/client.crt: no such file or directory
ingress_addon_legacy_test.go:79: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ingress-addon-legacy-101309 addons enable ingress-dns --alsologtostderr -v=5: exit status 10 (1m29.075097749s)

                                                
                                                
-- stdout --
	* ingress-dns is an addon maintained by Google. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	* After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	  - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1109 10:18:52.956785   25959 out.go:296] Setting OutFile to fd 1 ...
	I1109 10:18:52.957633   25959 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1109 10:18:52.957640   25959 out.go:309] Setting ErrFile to fd 2...
	I1109 10:18:52.957644   25959 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1109 10:18:52.957762   25959 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15331-22028/.minikube/bin
	I1109 10:18:52.979292   25959 out.go:177] * ingress-dns is an addon maintained by Google. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	I1109 10:18:53.000974   25959 config.go:180] Loaded profile config "ingress-addon-legacy-101309": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.18.20
	I1109 10:18:53.001002   25959 addons.go:65] Setting ingress-dns=true in profile "ingress-addon-legacy-101309"
	I1109 10:18:53.001013   25959 addons.go:227] Setting addon ingress-dns=true in "ingress-addon-legacy-101309"
	I1109 10:18:53.001364   25959 host.go:66] Checking if "ingress-addon-legacy-101309" exists ...
	I1109 10:18:53.002122   25959 cli_runner.go:164] Run: docker container inspect ingress-addon-legacy-101309 --format={{.State.Status}}
	I1109 10:18:53.080288   25959 out.go:177] * After the addon is enabled, please run "minikube tunnel" and your ingress resources would be available at "127.0.0.1"
	I1109 10:18:53.101932   25959 out.go:177]   - Using image cryptexlabs/minikube-ingress-dns:0.3.0
	I1109 10:18:53.124063   25959 addons.go:419] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1109 10:18:53.124105   25959 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2434 bytes)
	I1109 10:18:53.124303   25959 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ingress-addon-legacy-101309
	I1109 10:18:53.181435   25959 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:61702 SSHKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/ingress-addon-legacy-101309/id_rsa Username:docker}
	I1109 10:18:53.273099   25959 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1109 10:18:53.322886   25959 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 10:18:53.322907   25959 retry.go:31] will retry after 276.165072ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 10:18:53.599216   25959 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1109 10:18:53.654382   25959 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 10:18:53.654398   25959 retry.go:31] will retry after 540.190908ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 10:18:54.195989   25959 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1109 10:18:54.251470   25959 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 10:18:54.251495   25959 retry.go:31] will retry after 655.06503ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 10:18:54.908865   25959 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1109 10:18:54.967210   25959 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 10:18:54.967226   25959 retry.go:31] will retry after 791.196345ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 10:18:55.759767   25959 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1109 10:18:55.812850   25959 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 10:18:55.812863   25959 retry.go:31] will retry after 1.170244332s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 10:18:56.983996   25959 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1109 10:18:57.037518   25959 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 10:18:57.037533   25959 retry.go:31] will retry after 2.253109428s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 10:18:59.292957   25959 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1109 10:18:59.344941   25959 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 10:18:59.344955   25959 retry.go:31] will retry after 1.610739793s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 10:19:00.957968   25959 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1109 10:19:01.012859   25959 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 10:19:01.012873   25959 retry.go:31] will retry after 2.804311738s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 10:19:03.819326   25959 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1109 10:19:03.872967   25959 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 10:19:03.872982   25959 retry.go:31] will retry after 3.824918958s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 10:19:07.700196   25959 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1109 10:19:07.753938   25959 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 10:19:07.753951   25959 retry.go:31] will retry after 7.69743562s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 10:19:15.451945   25959 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1109 10:19:15.504527   25959 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 10:19:15.504540   25959 retry.go:31] will retry after 14.635568968s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 10:19:30.142406   25959 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1109 10:19:30.198529   25959 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 10:19:30.198550   25959 retry.go:31] will retry after 28.406662371s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 10:19:58.607542   25959 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1109 10:19:58.660284   25959 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 10:19:58.660300   25959 retry.go:31] will retry after 23.168280436s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 10:20:21.830906   25959 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	W1109 10:20:21.884088   25959 addons.go:440] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 10:20:21.906352   25959 out.go:177] 
	W1109 10:20:21.926765   25959 out.go:239] X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	W1109 10:20:21.926789   25959 out.go:239] * 
	W1109 10:20:21.932655   25959 out.go:239] ╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_addons_26091442b04c5e26589fdfa18b5031c2ff11dd6b_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	I1109 10:20:21.953452   25959 out.go:177] 

                                                
                                                
** /stderr **
ingress_addon_legacy_test.go:80: failed to enable ingress-dns addon: exit status 10
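Each retry in the trace above fails identically while the interval grows from about 0.3s to 28s, after which minikube gives up with exit status 10. A small shell sketch of that cadence, with a stand-in manifest name (ingress-dns-pod.yaml here is illustrative, not the file on the node):

	# Retry an apply with roughly doubling delays, as retry.go does above
	delay=0.3
	for attempt in 1 2 3 4 5 6 7 8 9 10 11 12 13 14; do
	  kubectl apply -f ingress-dns-pod.yaml && break
	  echo "attempt ${attempt} failed; retrying after ${delay}s"
	  sleep "${delay}"
	  delay=$(echo "${delay} * 2" | bc)
	done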
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-101309
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-101309:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4177326f428caa8a05c360bce74c34f3363ba07ce4e597917175749a49da7acc",
	        "Created": "2022-11-09T18:13:20.893548707Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 40232,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-11-09T18:13:21.179329027Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:866c1fe4e3f2d2bfd7e546c12f77c7ef1d94d65a891923ff6772712a9f20df40",
	        "ResolvConfPath": "/var/lib/docker/containers/4177326f428caa8a05c360bce74c34f3363ba07ce4e597917175749a49da7acc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4177326f428caa8a05c360bce74c34f3363ba07ce4e597917175749a49da7acc/hostname",
	        "HostsPath": "/var/lib/docker/containers/4177326f428caa8a05c360bce74c34f3363ba07ce4e597917175749a49da7acc/hosts",
	        "LogPath": "/var/lib/docker/containers/4177326f428caa8a05c360bce74c34f3363ba07ce4e597917175749a49da7acc/4177326f428caa8a05c360bce74c34f3363ba07ce4e597917175749a49da7acc-json.log",
	        "Name": "/ingress-addon-legacy-101309",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "ingress-addon-legacy-101309:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-101309",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/9b3b4b600bb8261bbf2b7702087e4c9887d6c516fd952c8ef0887a332d9917ec-init/diff:/var/lib/docker/overlay2/8c1487330bae95024fb04d0a8169f7cc81fd1ba3c27821870f7ac7c3f14eba21/diff:/var/lib/docker/overlay2/bcaf2c5b25be7a7acfb5b663242cc7456d579ea111b07e556bc197c7bfe8eceb/diff:/var/lib/docker/overlay2/0638d8210ce7d8ac0e4379a16e33ec4ba3dad0040bc7b1e6eee9a3ce3b1bab29/diff:/var/lib/docker/overlay2/82d04ede67e6bea7f3cfd2fd8cdf0af23333441d1a311f6c55109e45255a64ad/diff:/var/lib/docker/overlay2/00bbdacd39c41ffbc754eaba2d71640e0fb4097eb9097b8c2a5999bb5a8d4954/diff:/var/lib/docker/overlay2/dcea734b558e644021b8ede0f23c4e46a58e4c344becb334c465fd62b5d48e24/diff:/var/lib/docker/overlay2/ac3602d3dd4e947c3a4676ef8c632089eb73ee68aba964a7d95271ee18eb97f2/diff:/var/lib/docker/overlay2/ac2acc0194de08599857f1b8448ae7b4683ed77f947900bfd694cf26f6c54ffc/diff:/var/lib/docker/overlay2/fdbfaed38c23fa0bd5c54d311629017408fe01fee83151dd3f3d638a7617f4e4/diff:/var/lib/docker/overlay2/d025fd
583df9cfe294d4d46082700b7f5c621b93a796ba7f8f971ddaa60fd83a/diff:/var/lib/docker/overlay2/f4c2a2db4696fc9f1bd6e98e05d393517d2daaeb90f35ae457c61d742e4cc236/diff:/var/lib/docker/overlay2/5ca3c90c302636922d6701cd2547bba3ccd398ec5ade10e04dccd4fe6104a487/diff:/var/lib/docker/overlay2/a5a65589498adaf58375923e30a95f690962a85ecbf6af317b41821b327542b2/diff:/var/lib/docker/overlay2/ff71186ee131d2e64c9cb2be6b53d85bf84ea4a195c417de669d42fe5e10eecd/diff:/var/lib/docker/overlay2/493a221169b45236aaee4b88113fdb3c67c8fbb99e614b4a728d47a8448a3f3f/diff:/var/lib/docker/overlay2/4bafd70e2ae935045921b84746858ec62889e360ddf11495e2a15831b74efc0a/diff:/var/lib/docker/overlay2/90fd6faa0cf3969fb696847bf51d309918860f0cc4599a708e4932647f26c73e/diff:/var/lib/docker/overlay2/ea92881c6586b95c867a9734394d9d100f56f7cbe0812c11395e47b6035c4508/diff:/var/lib/docker/overlay2/ecab8d41ffba5fecbe6e01377fa7b74a9a81ceea0b6ce37ad2373c1bbf89f44a/diff:/var/lib/docker/overlay2/0a01bb2689fa7bca8ea3322bf7e0b9d33392f902c096d5e452da6755180c4a06/diff:/var/lib/d
ocker/overlay2/ab470b7aab8ddccf634d27d72ad09bcf355c2bd4439dcdf67f345220671e4238/diff:/var/lib/docker/overlay2/e7aae4cf5fe266e78947648cb680b6e10a1e6f6527df18d86605a770111ddaa5/diff:/var/lib/docker/overlay2/6dd4c667173ad3322ca465531a62d549cfe66fbb40165818a4e3923e37895eee/diff:/var/lib/docker/overlay2/6053a29c5dc20476b02a6b6d0dafc1d7a81702c6680392177192d709341eabd0/diff:/var/lib/docker/overlay2/80d8ec07feaf3a90ae374a6503523b083045c37de15abf3c2f12d0a21bea84c4/diff:/var/lib/docker/overlay2/55ad8679d9710c334bac8daf6e3b0f9a8fcafc62f44b8f2612bb054ff91aac64/diff:/var/lib/docker/overlay2/64743b589f654fa1e35b0e7be5ff94a3bebfa17c8f1c9811e0d42cdade3f57e7/diff:/var/lib/docker/overlay2/3722e4a69202d28b84adf462e6aa9f065e8079b1c00f372b68d56c9b2c44e658/diff:/var/lib/docker/overlay2/d1ceccb867521773a63007a600d64b8537e1cb227e2d9a6f9df5525e8315b3ef/diff:/var/lib/docker/overlay2/5de0b7762a7bcd971dba6ab8b5ec3a1163b2eb72c904b17e6b0b10dac2ed8cc6/diff:/var/lib/docker/overlay2/36f2255b89964a0e12e3175634bd5c1dfabf520e5a894e260323e26c3a3
83e8c/diff:/var/lib/docker/overlay2/58ca82e7923ce16120ce2bdcabd5d071ca9618a7139cac111d5d271fcb44d6b6/diff:/var/lib/docker/overlay2/c6b28d136c7e3834c9977a2115a7c798e71334d33a76997b156f96642e187677/diff:/var/lib/docker/overlay2/8a75a817735ea5c25b9b75502ba91bba33b5160dab28a17f2f44fa68bd8dcc3f/diff:/var/lib/docker/overlay2/4513fa1cc1e8023f3c0a924e36218c37dfe3595aec46e4d2d96d6c165774b8a3/diff:/var/lib/docker/overlay2/3d3be6ad927b487673f3c43210c9ea9a1acfa4d46cbcb724fce27baf9158b507/diff:/var/lib/docker/overlay2/b8e22ec10062469f680485d2f5f73afce0218c32b25e56188c00547a8152d0c7/diff:/var/lib/docker/overlay2/cb1cb5efbfa387d8fc791f28bdad103d39664ae58a6e372eddc5588db5779427/diff:/var/lib/docker/overlay2/c796de90ee7673fa4d316d056c320ee04f0b6ba574aaa33e4073e3a7200c11a6/diff:/var/lib/docker/overlay2/73c2de759693b5ffd934f7354e3db91ba89c6a5a9c24621fd7c27411bc335c5a/diff:/var/lib/docker/overlay2/46e9fe39b8edeecbe0b31037d24c2994ac3848fbb3af5ed3c47ca2fc1ad0d301/diff:/var/lib/docker/overlay2/febe0fa15a70685bf242a86e91427efdb9b7ec
302a48a7004f89cc569145c7a1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9b3b4b600bb8261bbf2b7702087e4c9887d6c516fd952c8ef0887a332d9917ec/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9b3b4b600bb8261bbf2b7702087e4c9887d6c516fd952c8ef0887a332d9917ec/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9b3b4b600bb8261bbf2b7702087e4c9887d6c516fd952c8ef0887a332d9917ec/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-101309",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-101309/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-101309",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-101309",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-101309",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6ba363317b9ef0055dc7409d218fea28c156d256b12a1788291ed0eaae665bb9",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "61702"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "61703"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "61699"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "61700"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "61701"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/6ba363317b9e",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-101309": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "4177326f428c",
	                        "ingress-addon-legacy-101309"
	                    ],
	                    "NetworkID": "6f252337b29e2f0cbd9cf8a51a00540e04f9eb83e0a41a38678fafa5eb5b7ae1",
	                    "EndpointID": "c2990976f9a3da549534fa73dee0d50e5f28bd9f4fee464fd17899c7127f09a1",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-101309 -n ingress-addon-legacy-101309
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-101309 -n ingress-addon-legacy-101309: exit status 6 (401.870461ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1109 10:20:22.428116   26043 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-101309" does not appear in /Users/jenkins/minikube-integration/15331-22028/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-101309" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (89.54s)
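Note that docker inspect reports the node container as running while every apply inside it is refused on localhost:8443, which points at the apiserver process rather than the container itself. A quick check of that hypothesis from the host, assuming ss is available inside the kicbase image (container and profile names are from this run):

	# Is anything listening on the apiserver port inside the node?
	docker exec ingress-addon-legacy-101309 sudo ss -tlnp | grep 8443 || echo "apiserver not listening"
	# Collect the bundle the warning box above asks for
	out/minikube-darwin-amd64 -p ingress-addon-legacy-101309 logs --file=logs.txt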

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddons (0.45s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:159: failed to get Kubernetes client: <nil>
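This subtest fails before reaching the cluster: the Kubernetes client is built from the same kubeconfig that the status checks above report as missing this profile. A one-line confirmation, with the path taken from those errors:

	# The profile should appear in the kubeconfig the tests read; here it does not
	grep -q "ingress-addon-legacy-101309" /Users/jenkins/minikube-integration/15331-22028/kubeconfig || echo "profile missing from kubeconfig"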
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ingress-addon-legacy-101309
helpers_test.go:235: (dbg) docker inspect ingress-addon-legacy-101309:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4177326f428caa8a05c360bce74c34f3363ba07ce4e597917175749a49da7acc",
	        "Created": "2022-11-09T18:13:20.893548707Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 40232,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-11-09T18:13:21.179329027Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:866c1fe4e3f2d2bfd7e546c12f77c7ef1d94d65a891923ff6772712a9f20df40",
	        "ResolvConfPath": "/var/lib/docker/containers/4177326f428caa8a05c360bce74c34f3363ba07ce4e597917175749a49da7acc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4177326f428caa8a05c360bce74c34f3363ba07ce4e597917175749a49da7acc/hostname",
	        "HostsPath": "/var/lib/docker/containers/4177326f428caa8a05c360bce74c34f3363ba07ce4e597917175749a49da7acc/hosts",
	        "LogPath": "/var/lib/docker/containers/4177326f428caa8a05c360bce74c34f3363ba07ce4e597917175749a49da7acc/4177326f428caa8a05c360bce74c34f3363ba07ce4e597917175749a49da7acc-json.log",
	        "Name": "/ingress-addon-legacy-101309",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "ingress-addon-legacy-101309:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ingress-addon-legacy-101309",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/9b3b4b600bb8261bbf2b7702087e4c9887d6c516fd952c8ef0887a332d9917ec-init/diff:/var/lib/docker/overlay2/8c1487330bae95024fb04d0a8169f7cc81fd1ba3c27821870f7ac7c3f14eba21/diff:/var/lib/docker/overlay2/bcaf2c5b25be7a7acfb5b663242cc7456d579ea111b07e556bc197c7bfe8eceb/diff:/var/lib/docker/overlay2/0638d8210ce7d8ac0e4379a16e33ec4ba3dad0040bc7b1e6eee9a3ce3b1bab29/diff:/var/lib/docker/overlay2/82d04ede67e6bea7f3cfd2fd8cdf0af23333441d1a311f6c55109e45255a64ad/diff:/var/lib/docker/overlay2/00bbdacd39c41ffbc754eaba2d71640e0fb4097eb9097b8c2a5999bb5a8d4954/diff:/var/lib/docker/overlay2/dcea734b558e644021b8ede0f23c4e46a58e4c344becb334c465fd62b5d48e24/diff:/var/lib/docker/overlay2/ac3602d3dd4e947c3a4676ef8c632089eb73ee68aba964a7d95271ee18eb97f2/diff:/var/lib/docker/overlay2/ac2acc0194de08599857f1b8448ae7b4683ed77f947900bfd694cf26f6c54ffc/diff:/var/lib/docker/overlay2/fdbfaed38c23fa0bd5c54d311629017408fe01fee83151dd3f3d638a7617f4e4/diff:/var/lib/docker/overlay2/d025fd
583df9cfe294d4d46082700b7f5c621b93a796ba7f8f971ddaa60fd83a/diff:/var/lib/docker/overlay2/f4c2a2db4696fc9f1bd6e98e05d393517d2daaeb90f35ae457c61d742e4cc236/diff:/var/lib/docker/overlay2/5ca3c90c302636922d6701cd2547bba3ccd398ec5ade10e04dccd4fe6104a487/diff:/var/lib/docker/overlay2/a5a65589498adaf58375923e30a95f690962a85ecbf6af317b41821b327542b2/diff:/var/lib/docker/overlay2/ff71186ee131d2e64c9cb2be6b53d85bf84ea4a195c417de669d42fe5e10eecd/diff:/var/lib/docker/overlay2/493a221169b45236aaee4b88113fdb3c67c8fbb99e614b4a728d47a8448a3f3f/diff:/var/lib/docker/overlay2/4bafd70e2ae935045921b84746858ec62889e360ddf11495e2a15831b74efc0a/diff:/var/lib/docker/overlay2/90fd6faa0cf3969fb696847bf51d309918860f0cc4599a708e4932647f26c73e/diff:/var/lib/docker/overlay2/ea92881c6586b95c867a9734394d9d100f56f7cbe0812c11395e47b6035c4508/diff:/var/lib/docker/overlay2/ecab8d41ffba5fecbe6e01377fa7b74a9a81ceea0b6ce37ad2373c1bbf89f44a/diff:/var/lib/docker/overlay2/0a01bb2689fa7bca8ea3322bf7e0b9d33392f902c096d5e452da6755180c4a06/diff:/var/lib/d
ocker/overlay2/ab470b7aab8ddccf634d27d72ad09bcf355c2bd4439dcdf67f345220671e4238/diff:/var/lib/docker/overlay2/e7aae4cf5fe266e78947648cb680b6e10a1e6f6527df18d86605a770111ddaa5/diff:/var/lib/docker/overlay2/6dd4c667173ad3322ca465531a62d549cfe66fbb40165818a4e3923e37895eee/diff:/var/lib/docker/overlay2/6053a29c5dc20476b02a6b6d0dafc1d7a81702c6680392177192d709341eabd0/diff:/var/lib/docker/overlay2/80d8ec07feaf3a90ae374a6503523b083045c37de15abf3c2f12d0a21bea84c4/diff:/var/lib/docker/overlay2/55ad8679d9710c334bac8daf6e3b0f9a8fcafc62f44b8f2612bb054ff91aac64/diff:/var/lib/docker/overlay2/64743b589f654fa1e35b0e7be5ff94a3bebfa17c8f1c9811e0d42cdade3f57e7/diff:/var/lib/docker/overlay2/3722e4a69202d28b84adf462e6aa9f065e8079b1c00f372b68d56c9b2c44e658/diff:/var/lib/docker/overlay2/d1ceccb867521773a63007a600d64b8537e1cb227e2d9a6f9df5525e8315b3ef/diff:/var/lib/docker/overlay2/5de0b7762a7bcd971dba6ab8b5ec3a1163b2eb72c904b17e6b0b10dac2ed8cc6/diff:/var/lib/docker/overlay2/36f2255b89964a0e12e3175634bd5c1dfabf520e5a894e260323e26c3a3
83e8c/diff:/var/lib/docker/overlay2/58ca82e7923ce16120ce2bdcabd5d071ca9618a7139cac111d5d271fcb44d6b6/diff:/var/lib/docker/overlay2/c6b28d136c7e3834c9977a2115a7c798e71334d33a76997b156f96642e187677/diff:/var/lib/docker/overlay2/8a75a817735ea5c25b9b75502ba91bba33b5160dab28a17f2f44fa68bd8dcc3f/diff:/var/lib/docker/overlay2/4513fa1cc1e8023f3c0a924e36218c37dfe3595aec46e4d2d96d6c165774b8a3/diff:/var/lib/docker/overlay2/3d3be6ad927b487673f3c43210c9ea9a1acfa4d46cbcb724fce27baf9158b507/diff:/var/lib/docker/overlay2/b8e22ec10062469f680485d2f5f73afce0218c32b25e56188c00547a8152d0c7/diff:/var/lib/docker/overlay2/cb1cb5efbfa387d8fc791f28bdad103d39664ae58a6e372eddc5588db5779427/diff:/var/lib/docker/overlay2/c796de90ee7673fa4d316d056c320ee04f0b6ba574aaa33e4073e3a7200c11a6/diff:/var/lib/docker/overlay2/73c2de759693b5ffd934f7354e3db91ba89c6a5a9c24621fd7c27411bc335c5a/diff:/var/lib/docker/overlay2/46e9fe39b8edeecbe0b31037d24c2994ac3848fbb3af5ed3c47ca2fc1ad0d301/diff:/var/lib/docker/overlay2/febe0fa15a70685bf242a86e91427efdb9b7ec
302a48a7004f89cc569145c7a1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/9b3b4b600bb8261bbf2b7702087e4c9887d6c516fd952c8ef0887a332d9917ec/merged",
	                "UpperDir": "/var/lib/docker/overlay2/9b3b4b600bb8261bbf2b7702087e4c9887d6c516fd952c8ef0887a332d9917ec/diff",
	                "WorkDir": "/var/lib/docker/overlay2/9b3b4b600bb8261bbf2b7702087e4c9887d6c516fd952c8ef0887a332d9917ec/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ingress-addon-legacy-101309",
	                "Source": "/var/lib/docker/volumes/ingress-addon-legacy-101309/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ingress-addon-legacy-101309",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ingress-addon-legacy-101309",
	                "name.minikube.sigs.k8s.io": "ingress-addon-legacy-101309",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6ba363317b9ef0055dc7409d218fea28c156d256b12a1788291ed0eaae665bb9",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "61702"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "61703"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "61699"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "61700"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "61701"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/6ba363317b9e",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ingress-addon-legacy-101309": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "4177326f428c",
	                        "ingress-addon-legacy-101309"
	                    ],
	                    "NetworkID": "6f252337b29e2f0cbd9cf8a51a00540e04f9eb83e0a41a38678fafa5eb5b7ae1",
	                    "EndpointID": "c2990976f9a3da549534fa73dee0d50e5f28bd9f4fee464fd17899c7127f09a1",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
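The JSON block above is the raw `docker inspect` record for the ingress-addon-legacy-101309 node container. For reference, a single field can be extracted from that record with a Go template; this sketch reuses the exact template the harness itself runs later in this report, and assumes the container still exists:

	# Print the host port mapped to the container's SSH port (22/tcp).
	docker container inspect \
	  -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
	  ingress-addon-legacy-101309
	# Per the "Ports" map above, this would print 61702.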
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-101309 -n ingress-addon-legacy-101309
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ingress-addon-legacy-101309 -n ingress-addon-legacy-101309: exit status 6 (391.709298ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1109 10:20:22.879453   26055 status.go:415] kubeconfig endpoint: extract IP: "ingress-addon-legacy-101309" does not appear in /Users/jenkins/minikube-integration/15331-22028/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "ingress-addon-legacy-101309" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (0.45s)
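The exit status 6 above traces to the kubeconfig endpoint lookup in the stderr block: the profile is missing from the kubeconfig file, so `status` cannot resolve the cluster endpoint. The status output itself names the remedy; a hedged recovery sketch, using the profile name from this report and standard minikube flags:

	# Rewrite the kubeconfig entry for this profile, then check what kubectl sees.
	out/minikube-darwin-amd64 -p ingress-addon-legacy-101309 update-context
	kubectl config current-context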

                                                
                                    
TestMultiNode/serial/RestartMultiNode (217.84s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:342: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:352: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-102528 --wait=true -v=8 --alsologtostderr --driver=docker 
E1109 10:30:56.438485   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/addons-100328/client.crt: no such file or directory
E1109 10:31:45.183476   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/functional-100827/client.crt: no such file or directory
E1109 10:33:08.242921   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/functional-100827/client.crt: no such file or directory
multinode_test.go:352: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-102528 --wait=true -v=8 --alsologtostderr --driver=docker : exit status 80 (3m32.571700335s)

                                                
                                                
-- stdout --
	* [multinode-102528] minikube v1.28.0 on Darwin 13.0
	  - MINIKUBE_LOCATION=15331
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15331-22028/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15331-22028/.minikube
	* Using the docker driver based on existing profile
	* Starting control plane node multinode-102528 in cluster multinode-102528
	* Pulling base image ...
	* Restarting existing docker container for "multinode-102528" ...
	* Preparing Kubernetes v1.25.3 on Docker 20.10.20 ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	* Starting worker node multinode-102528-m02 in cluster multinode-102528
	* Pulling base image ...
	* Restarting existing docker container for "multinode-102528-m02" ...
	* Found network options:
	  - NO_PROXY=192.168.58.2
	* Preparing Kubernetes v1.25.3 on Docker 20.10.20 ...
	  - env NO_PROXY=192.168.58.2
	
	

                                                
                                                
-- /stdout --
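Despite the progress markers above, the start command exited with status 80. The stderr that follows shows the harness polling container state with a Go template; the same probe can be run by hand (it mirrors the `cli_runner` lines below) to confirm whether the node container actually came up:

	# Print the raw Docker state of the control-plane node container,
	# e.g. "running" once the restart succeeds.
	docker container inspect multinode-102528 --format={{.State.Status}}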
** stderr ** 
	I1109 10:30:48.536912   29322 out.go:296] Setting OutFile to fd 1 ...
	I1109 10:30:48.537183   29322 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1109 10:30:48.537188   29322 out.go:309] Setting ErrFile to fd 2...
	I1109 10:30:48.537192   29322 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1109 10:30:48.537317   29322 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15331-22028/.minikube/bin
	I1109 10:30:48.537818   29322 out.go:303] Setting JSON to false
	I1109 10:30:48.556746   29322 start.go:116] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":12623,"bootTime":1668006025,"procs":393,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.0","kernelVersion":"22.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W1109 10:30:48.556849   29322 start.go:124] gopshost.Virtualization returned error: not implemented yet
	I1109 10:30:48.578543   29322 out.go:177] * [multinode-102528] minikube v1.28.0 on Darwin 13.0
	I1109 10:30:48.622116   29322 notify.go:220] Checking for updates...
	I1109 10:30:48.644206   29322 out.go:177]   - MINIKUBE_LOCATION=15331
	I1109 10:30:48.666203   29322 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15331-22028/kubeconfig
	I1109 10:30:48.688126   29322 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1109 10:30:48.710406   29322 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 10:30:48.732385   29322 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15331-22028/.minikube
	I1109 10:30:48.754842   29322 config.go:180] Loaded profile config "multinode-102528": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1109 10:30:48.755501   29322 driver.go:365] Setting default libvirt URI to qemu:///system
	I1109 10:30:48.823234   29322 docker.go:137] docker version: linux-20.10.20
	I1109 10:30:48.823401   29322 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 10:30:48.963279   29322 info.go:266] docker info: {ID:DDH7:AERQ:YR2O:D5QA:GJP7:5WGB:4FTA:645H:2EO7:4YBT:LF5P:55H2 Containers:2 ContainersRunning:0 ContainersPaused:0 ContainersStopped:2 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:false NGoroutines:47 SystemTime:2022-11-09 18:30:48.873497036 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231719936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.1] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1109 10:30:49.006881   29322 out.go:177] * Using the docker driver based on existing profile
	I1109 10:30:49.028713   29322 start.go:282] selected driver: docker
	I1109 10:30:49.028740   29322 start.go:808] validating driver "docker" against &{Name:multinode-102528 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:multinode-102528 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1109 10:30:49.028965   29322 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 10:30:49.029221   29322 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 10:30:49.170716   29322 info.go:266] docker info: {ID:DDH7:AERQ:YR2O:D5QA:GJP7:5WGB:4FTA:645H:2EO7:4YBT:LF5P:55H2 Containers:2 ContainersRunning:0 ContainersPaused:0 ContainersStopped:2 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:false NGoroutines:47 SystemTime:2022-11-09 18:30:49.082217702 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231719936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.1] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1109 10:30:49.173190   29322 start_flags.go:901] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 10:30:49.173219   29322 cni.go:95] Creating CNI manager for ""
	I1109 10:30:49.173226   29322 cni.go:156] 2 nodes found, recommending kindnet
	I1109 10:30:49.173246   29322 start_flags.go:317] config:
	{Name:multinode-102528 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:multinode-102528 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1109 10:30:49.216988   29322 out.go:177] * Starting control plane node multinode-102528 in cluster multinode-102528
	I1109 10:30:49.239627   29322 cache.go:120] Beginning downloading kic base image for docker with docker
	I1109 10:30:49.260796   29322 out.go:177] * Pulling base image ...
	I1109 10:30:49.302794   29322 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1109 10:30:49.302849   29322 image.go:76] Checking for gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon
	I1109 10:30:49.302891   29322 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15331-22028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	I1109 10:30:49.302912   29322 cache.go:57] Caching tarball of preloaded images
	I1109 10:30:49.303179   29322 preload.go:174] Found /Users/jenkins/minikube-integration/15331-22028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1109 10:30:49.303197   29322 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on docker
	I1109 10:30:49.304196   29322 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/multinode-102528/config.json ...
	I1109 10:30:49.360300   29322 image.go:80] Found gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon, skipping pull
	I1109 10:30:49.360318   29322 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 exists in daemon, skipping load
	I1109 10:30:49.360327   29322 cache.go:208] Successfully downloaded all kic artifacts
	I1109 10:30:49.360391   29322 start.go:364] acquiring machines lock for multinode-102528: {Name:mk70f613f7d58abdd1a6ac3ac877e9dff914f556 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 10:30:49.360512   29322 start.go:368] acquired machines lock for "multinode-102528" in 100.317µs
	I1109 10:30:49.360540   29322 start.go:96] Skipping create...Using existing machine configuration
	I1109 10:30:49.360552   29322 fix.go:55] fixHost starting: 
	I1109 10:30:49.360816   29322 cli_runner.go:164] Run: docker container inspect multinode-102528 --format={{.State.Status}}
	I1109 10:30:49.417463   29322 fix.go:103] recreateIfNeeded on multinode-102528: state=Stopped err=<nil>
	W1109 10:30:49.417502   29322 fix.go:129] unexpected machine state, will restart: <nil>
	I1109 10:30:49.461188   29322 out.go:177] * Restarting existing docker container for "multinode-102528" ...
	I1109 10:30:49.482191   29322 cli_runner.go:164] Run: docker start multinode-102528
	I1109 10:30:49.807720   29322 cli_runner.go:164] Run: docker container inspect multinode-102528 --format={{.State.Status}}
	I1109 10:30:49.865305   29322 kic.go:415] container "multinode-102528" state is running.
	I1109 10:30:49.865878   29322 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-102528
	I1109 10:30:49.925450   29322 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/multinode-102528/config.json ...
	I1109 10:30:49.925849   29322 machine.go:88] provisioning docker machine ...
	I1109 10:30:49.925874   29322 ubuntu.go:169] provisioning hostname "multinode-102528"
	I1109 10:30:49.925958   29322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102528
	I1109 10:30:49.985024   29322 main.go:134] libmachine: Using SSH client type: native
	I1109 10:30:49.985247   29322 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6d00] 0x13e9e80 <nil>  [] 0s} 127.0.0.1 62611 <nil> <nil>}
	I1109 10:30:49.985264   29322 main.go:134] libmachine: About to run SSH command:
	sudo hostname multinode-102528 && echo "multinode-102528" | sudo tee /etc/hostname
	I1109 10:30:50.117994   29322 main.go:134] libmachine: SSH cmd err, output: <nil>: multinode-102528
	
	I1109 10:30:50.118091   29322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102528
	I1109 10:30:50.178996   29322 main.go:134] libmachine: Using SSH client type: native
	I1109 10:30:50.179161   29322 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6d00] 0x13e9e80 <nil>  [] 0s} 127.0.0.1 62611 <nil> <nil>}
	I1109 10:30:50.179173   29322 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-102528' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-102528/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-102528' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 10:30:50.292940   29322 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I1109 10:30:50.292966   29322 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15331-22028/.minikube CaCertPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15331-22028/.minikube}
	I1109 10:30:50.292984   29322 ubuntu.go:177] setting up certificates
	I1109 10:30:50.292994   29322 provision.go:83] configureAuth start
	I1109 10:30:50.293104   29322 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-102528
	I1109 10:30:50.350556   29322 provision.go:138] copyHostCerts
	I1109 10:30:50.350615   29322 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/15331-22028/.minikube/cert.pem
	I1109 10:30:50.350692   29322 exec_runner.go:144] found /Users/jenkins/minikube-integration/15331-22028/.minikube/cert.pem, removing ...
	I1109 10:30:50.350701   29322 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15331-22028/.minikube/cert.pem
	I1109 10:30:50.350805   29322 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15331-22028/.minikube/cert.pem (1123 bytes)
	I1109 10:30:50.350994   29322 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/15331-22028/.minikube/key.pem
	I1109 10:30:50.351037   29322 exec_runner.go:144] found /Users/jenkins/minikube-integration/15331-22028/.minikube/key.pem, removing ...
	I1109 10:30:50.351050   29322 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15331-22028/.minikube/key.pem
	I1109 10:30:50.351117   29322 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15331-22028/.minikube/key.pem (1675 bytes)
	I1109 10:30:50.351241   29322 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.pem
	I1109 10:30:50.351279   29322 exec_runner.go:144] found /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.pem, removing ...
	I1109 10:30:50.351284   29322 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.pem
	I1109 10:30:50.351352   29322 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.pem (1082 bytes)
	I1109 10:30:50.351484   29322 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca-key.pem org=jenkins.multinode-102528 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-102528]
	I1109 10:30:50.446600   29322 provision.go:172] copyRemoteCerts
	I1109 10:30:50.446689   29322 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 10:30:50.446755   29322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102528
	I1109 10:30:50.503602   29322 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62611 SSHKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/multinode-102528/id_rsa Username:docker}
	I1109 10:30:50.588602   29322 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1109 10:30:50.588707   29322 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1109 10:30:50.605496   29322 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1109 10:30:50.605594   29322 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1109 10:30:50.622903   29322 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1109 10:30:50.623010   29322 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1109 10:30:50.641623   29322 provision.go:86] duration metric: configureAuth took 348.61787ms
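At this point the CA and the freshly generated server certificate and key have been copied to /etc/docker on the node. A hedged verification sketch, run inside the node; it assumes openssl 1.1.1+ is available there (for the -ext flag) and uses the paths from the scp lines above:

	# Confirm the server cert's subject and SANs match the san=[...] list
	# passed to the cert generator earlier in this log.
	sudo openssl x509 -in /etc/docker/server.pem -noout -subject -ext subjectAltName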
	I1109 10:30:50.641638   29322 ubuntu.go:193] setting minikube options for container-runtime
	I1109 10:30:50.641832   29322 config.go:180] Loaded profile config "multinode-102528": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1109 10:30:50.641917   29322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102528
	I1109 10:30:50.700165   29322 main.go:134] libmachine: Using SSH client type: native
	I1109 10:30:50.700354   29322 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6d00] 0x13e9e80 <nil>  [] 0s} 127.0.0.1 62611 <nil> <nil>}
	I1109 10:30:50.700368   29322 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1109 10:30:50.815507   29322 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1109 10:30:50.815526   29322 ubuntu.go:71] root file system type: overlay
	I1109 10:30:50.815705   29322 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1109 10:30:50.815814   29322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102528
	I1109 10:30:50.874717   29322 main.go:134] libmachine: Using SSH client type: native
	I1109 10:30:50.874871   29322 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6d00] 0x13e9e80 <nil>  [] 0s} 127.0.0.1 62611 <nil> <nil>}
	I1109 10:30:50.874921   29322 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1109 10:30:51.002525   29322 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1109 10:30:51.002640   29322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102528
	I1109 10:30:51.059906   29322 main.go:134] libmachine: Using SSH client type: native
	I1109 10:30:51.060083   29322 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6d00] 0x13e9e80 <nil>  [] 0s} 127.0.0.1 62611 <nil> <nil>}
	I1109 10:30:51.060097   29322 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1109 10:30:51.184147   29322 main.go:134] libmachine: SSH cmd err, output: <nil>: 
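The one-liner above is the idempotent unit update: the rendered docker.service.new replaces the live unit only when the two files differ, after which the daemon is reloaded and docker force-restarted. A restatement of the same command as readable shell, nothing added:

	# diff exits non-zero when the files differ, which triggers the swap.
	if ! sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new; then
	  sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
	  sudo systemctl -f daemon-reload && \
	    sudo systemctl -f enable docker && \
	    sudo systemctl -f restart docker
	fi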
	I1109 10:30:51.184171   29322 machine.go:91] provisioned docker machine in 1.258338301s
	I1109 10:30:51.184181   29322 start.go:300] post-start starting for "multinode-102528" (driver="docker")
	I1109 10:30:51.184187   29322 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 10:30:51.184256   29322 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 10:30:51.184316   29322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102528
	I1109 10:30:51.239599   29322 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62611 SSHKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/multinode-102528/id_rsa Username:docker}
	I1109 10:30:51.327949   29322 ssh_runner.go:195] Run: cat /etc/os-release
	I1109 10:30:51.331498   29322 command_runner.go:130] > NAME="Ubuntu"
	I1109 10:30:51.331509   29322 command_runner.go:130] > VERSION="20.04.5 LTS (Focal Fossa)"
	I1109 10:30:51.331513   29322 command_runner.go:130] > ID=ubuntu
	I1109 10:30:51.331520   29322 command_runner.go:130] > ID_LIKE=debian
	I1109 10:30:51.331527   29322 command_runner.go:130] > PRETTY_NAME="Ubuntu 20.04.5 LTS"
	I1109 10:30:51.331531   29322 command_runner.go:130] > VERSION_ID="20.04"
	I1109 10:30:51.331546   29322 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I1109 10:30:51.331551   29322 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I1109 10:30:51.331555   29322 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I1109 10:30:51.331565   29322 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I1109 10:30:51.331570   29322 command_runner.go:130] > VERSION_CODENAME=focal
	I1109 10:30:51.331575   29322 command_runner.go:130] > UBUNTU_CODENAME=focal
	I1109 10:30:51.331805   29322 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1109 10:30:51.331820   29322 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1109 10:30:51.331827   29322 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1109 10:30:51.331832   29322 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I1109 10:30:51.331841   29322 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15331-22028/.minikube/addons for local assets ...
	I1109 10:30:51.331947   29322 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15331-22028/.minikube/files for local assets ...
	I1109 10:30:51.332131   29322 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15331-22028/.minikube/files/etc/ssl/certs/228682.pem -> 228682.pem in /etc/ssl/certs
	I1109 10:30:51.332137   29322 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/files/etc/ssl/certs/228682.pem -> /etc/ssl/certs/228682.pem
	I1109 10:30:51.332341   29322 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1109 10:30:51.339472   29322 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/files/etc/ssl/certs/228682.pem --> /etc/ssl/certs/228682.pem (1708 bytes)
	I1109 10:30:51.357529   29322 start.go:303] post-start completed in 173.343278ms
	I1109 10:30:51.357615   29322 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 10:30:51.357681   29322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102528
	I1109 10:30:51.413026   29322 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62611 SSHKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/multinode-102528/id_rsa Username:docker}
	I1109 10:30:51.502361   29322 command_runner.go:130] > 6%
	I1109 10:30:51.502444   29322 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1109 10:30:51.506640   29322 command_runner.go:130] > 99G
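The two df probes above are the post-start disk check: percent of /var in use, then gigabytes still free. Decoded, the awk fields are:

	df -h /var | awk 'NR==2{print $5}'   # Use% column of data row -> "6%" here
	df -BG /var | awk 'NR==2{print $4}'  # Avail column in GiB    -> "99G" here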
	I1109 10:30:51.507026   29322 fix.go:57] fixHost completed within 2.14652871s
	I1109 10:30:51.507037   29322 start.go:83] releasing machines lock for "multinode-102528", held for 2.146573291s
	I1109 10:30:51.507144   29322 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-102528
	I1109 10:30:51.564543   29322 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1109 10:30:51.564549   29322 ssh_runner.go:195] Run: systemctl --version
	I1109 10:30:51.564626   29322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102528
	I1109 10:30:51.564630   29322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102528
	I1109 10:30:51.623344   29322 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62611 SSHKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/multinode-102528/id_rsa Username:docker}
	I1109 10:30:51.624355   29322 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62611 SSHKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/multinode-102528/id_rsa Username:docker}
	I1109 10:30:51.763408   29322 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1109 10:30:51.763503   29322 command_runner.go:130] > systemd 245 (245.4-4ubuntu3.18)
	I1109 10:30:51.763529   29322 command_runner.go:130] > +PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid
	I1109 10:30:51.763680   29322 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1109 10:30:51.771170   29322 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (234 bytes)
	I1109 10:30:51.783663   29322 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 10:30:51.848068   29322 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1109 10:30:51.930568   29322 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1109 10:30:51.939822   29322 command_runner.go:130] > # /lib/systemd/system/docker.service
	I1109 10:30:51.940003   29322 command_runner.go:130] > [Unit]
	I1109 10:30:51.940013   29322 command_runner.go:130] > Description=Docker Application Container Engine
	I1109 10:30:51.940018   29322 command_runner.go:130] > Documentation=https://docs.docker.com
	I1109 10:30:51.940022   29322 command_runner.go:130] > BindsTo=containerd.service
	I1109 10:30:51.940027   29322 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I1109 10:30:51.940031   29322 command_runner.go:130] > Wants=network-online.target
	I1109 10:30:51.940035   29322 command_runner.go:130] > Requires=docker.socket
	I1109 10:30:51.940039   29322 command_runner.go:130] > StartLimitBurst=3
	I1109 10:30:51.940043   29322 command_runner.go:130] > StartLimitIntervalSec=60
	I1109 10:30:51.940077   29322 command_runner.go:130] > [Service]
	I1109 10:30:51.940087   29322 command_runner.go:130] > Type=notify
	I1109 10:30:51.940091   29322 command_runner.go:130] > Restart=on-failure
	I1109 10:30:51.940097   29322 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1109 10:30:51.940103   29322 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1109 10:30:51.940109   29322 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1109 10:30:51.940115   29322 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1109 10:30:51.940120   29322 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1109 10:30:51.940130   29322 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1109 10:30:51.940137   29322 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1109 10:30:51.940160   29322 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1109 10:30:51.940167   29322 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1109 10:30:51.940170   29322 command_runner.go:130] > ExecStart=
	I1109 10:30:51.940182   29322 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I1109 10:30:51.940187   29322 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1109 10:30:51.940192   29322 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1109 10:30:51.940198   29322 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1109 10:30:51.940201   29322 command_runner.go:130] > LimitNOFILE=infinity
	I1109 10:30:51.940205   29322 command_runner.go:130] > LimitNPROC=infinity
	I1109 10:30:51.940213   29322 command_runner.go:130] > LimitCORE=infinity
	I1109 10:30:51.940219   29322 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1109 10:30:51.940223   29322 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1109 10:30:51.940226   29322 command_runner.go:130] > TasksMax=infinity
	I1109 10:30:51.940230   29322 command_runner.go:130] > TimeoutStartSec=0
	I1109 10:30:51.940236   29322 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1109 10:30:51.940239   29322 command_runner.go:130] > Delegate=yes
	I1109 10:30:51.940245   29322 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1109 10:30:51.940249   29322 command_runner.go:130] > KillMode=process
	I1109 10:30:51.940256   29322 command_runner.go:130] > [Install]
	I1109 10:30:51.940260   29322 command_runner.go:130] > WantedBy=multi-user.target
	I1109 10:30:51.940720   29322 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I1109 10:30:51.940788   29322 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1109 10:30:51.950281   29322 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 10:30:51.962058   29322 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1109 10:30:51.962069   29322 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
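The crictl.yaml written above is just two lines pointing crictl's runtime and image endpoints at the same cri-dockerd socket. A minimal Go sketch of producing that file (a hypothetical helper, not minikube's actual code; the /tmp path is used so the sketch runs without root):

package main

import (
	"fmt"
	"os"
)

// writeCrictlConfig renders the two-line crictl.yaml seen in the log,
// pointing both endpoints at one CRI socket. Hypothetical helper.
func writeCrictlConfig(path, endpoint string) error {
	content := fmt.Sprintf("runtime-endpoint: %s\nimage-endpoint: %s\n", endpoint, endpoint)
	return os.WriteFile(path, []byte(content), 0o644)
}

func main() {
	if err := writeCrictlConfig("/tmp/crictl.yaml", "unix:///var/run/cri-dockerd.sock"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}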
	I1109 10:30:51.963145   29322 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1109 10:30:52.027652   29322 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1109 10:30:52.094006   29322 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 10:30:52.162177   29322 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1109 10:30:52.421239   29322 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1109 10:30:52.485419   29322 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 10:30:52.553168   29322 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I1109 10:30:52.562393   29322 start.go:451] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1109 10:30:52.562477   29322 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1109 10:30:52.566121   29322 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1109 10:30:52.566130   29322 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1109 10:30:52.566135   29322 command_runner.go:130] > Device: 97h/151d	Inode: 118         Links: 1
	I1109 10:30:52.566140   29322 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I1109 10:30:52.566148   29322 command_runner.go:130] > Access: 2022-11-09 18:30:51.859134305 +0000
	I1109 10:30:52.566158   29322 command_runner.go:130] > Modify: 2022-11-09 18:30:51.859134305 +0000
	I1109 10:30:52.566165   29322 command_runner.go:130] > Change: 2022-11-09 18:30:51.860134306 +0000
	I1109 10:30:52.566169   29322 command_runner.go:130] >  Birth: -
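The "Will wait 60s for socket path" step above amounts to polling stat until the socket file appears or the deadline passes. A minimal sketch of that loop, assuming a fixed poll interval (the real start.go logic may differ):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists or timeout elapses, roughly the
// behavior behind "Will wait 60s for socket path". Hypothetical sketch.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	fmt.Println(waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second))
}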
	I1109 10:30:52.566251   29322 start.go:472] Will wait 60s for crictl version
	I1109 10:30:52.566293   29322 ssh_runner.go:195] Run: sudo crictl version
	I1109 10:30:52.593401   29322 command_runner.go:130] > Version:  0.1.0
	I1109 10:30:52.593412   29322 command_runner.go:130] > RuntimeName:  docker
	I1109 10:30:52.593416   29322 command_runner.go:130] > RuntimeVersion:  20.10.20
	I1109 10:30:52.593420   29322 command_runner.go:130] > RuntimeApiVersion:  1.41.0
	I1109 10:30:52.595533   29322 start.go:481] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.20
	RuntimeApiVersion:  1.41.0
	I1109 10:30:52.595625   29322 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1109 10:30:52.622062   29322 command_runner.go:130] > 20.10.20
	I1109 10:30:52.624554   29322 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1109 10:30:52.649810   29322 command_runner.go:130] > 20.10.20
	I1109 10:30:52.698010   29322 out.go:204] * Preparing Kubernetes v1.25.3 on Docker 20.10.20 ...
	I1109 10:30:52.698245   29322 cli_runner.go:164] Run: docker exec -t multinode-102528 dig +short host.docker.internal
	I1109 10:30:52.811910   29322 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I1109 10:30:52.812039   29322 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I1109 10:30:52.816280   29322 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
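The bash one-liner above is an idempotent /etc/hosts update: filter out any existing line for host.minikube.internal, append the fresh mapping, and copy the result back over the original. The same logic in Go, as a hypothetical sketch (the real command stages through a temp file under sudo):

package main

import (
	"fmt"
	"os"
	"strings"
)

// updateHosts drops any stale line ending in "\t<name>", appends the new
// "ip\tname" mapping, and rewrites the file. Hypothetical sketch.
func updateHosts(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	fmt.Println(updateHosts("/tmp/hosts", "192.168.65.2", "host.minikube.internal"))
}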
	I1109 10:30:52.826102   29322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-102528
	I1109 10:30:52.883457   29322 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1109 10:30:52.883562   29322 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1109 10:30:52.905966   29322 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.25.3
	I1109 10:30:52.905982   29322 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.25.3
	I1109 10:30:52.905987   29322 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.25.3
	I1109 10:30:52.905993   29322 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.25.3
	I1109 10:30:52.905999   29322 command_runner.go:130] > kindest/kindnetd:v20221004-44d545d1
	I1109 10:30:52.906003   29322 command_runner.go:130] > registry.k8s.io/pause:3.8
	I1109 10:30:52.906007   29322 command_runner.go:130] > registry.k8s.io/etcd:3.5.4-0
	I1109 10:30:52.906021   29322 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I1109 10:30:52.906026   29322 command_runner.go:130] > k8s.gcr.io/pause:3.6
	I1109 10:30:52.906030   29322 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1109 10:30:52.906033   29322 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I1109 10:30:52.908131   29322 docker.go:613] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	kindest/kindnetd:v20221004-44d545d1
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I1109 10:30:52.908148   29322 docker.go:543] Images already preloaded, skipping extraction
	I1109 10:30:52.908275   29322 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1109 10:30:52.928705   29322 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.25.3
	I1109 10:30:52.928717   29322 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.25.3
	I1109 10:30:52.928721   29322 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.25.3
	I1109 10:30:52.928725   29322 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.25.3
	I1109 10:30:52.928729   29322 command_runner.go:130] > kindest/kindnetd:v20221004-44d545d1
	I1109 10:30:52.928734   29322 command_runner.go:130] > registry.k8s.io/pause:3.8
	I1109 10:30:52.928739   29322 command_runner.go:130] > registry.k8s.io/etcd:3.5.4-0
	I1109 10:30:52.928746   29322 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I1109 10:30:52.928751   29322 command_runner.go:130] > k8s.gcr.io/pause:3.6
	I1109 10:30:52.928763   29322 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1109 10:30:52.928771   29322 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I1109 10:30:52.931544   29322 docker.go:613] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	kindest/kindnetd:v20221004-44d545d1
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I1109 10:30:52.931565   29322 cache_images.go:84] Images are preloaded, skipping loading
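The "Images are preloaded" decision is essentially a set-membership check: every image required for the requested Kubernetes version must already appear in the docker images listing. A hedged sketch of that comparison (the real check in cache_images.go is more involved, e.g. around tag handling):

package main

import "fmt"

// imagesPreloaded reports whether every required image is already present,
// mirroring the "Images are preloaded, skipping loading" decision above.
func imagesPreloaded(have, want []string) bool {
	present := make(map[string]bool, len(have))
	for _, img := range have {
		present[img] = true
	}
	for _, img := range want {
		if !present[img] {
			return false
		}
	}
	return true
}

func main() {
	have := []string{"registry.k8s.io/kube-apiserver:v1.25.3", "registry.k8s.io/pause:3.8"}
	want := []string{"registry.k8s.io/pause:3.8"}
	fmt.Println(imagesPreloaded(have, want)) // true
}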
	I1109 10:30:52.931666   29322 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1109 10:30:52.996570   29322 command_runner.go:130] > systemd
	I1109 10:30:52.999107   29322 cni.go:95] Creating CNI manager for ""
	I1109 10:30:52.999123   29322 cni.go:156] 2 nodes found, recommending kindnet
	I1109 10:30:52.999142   29322 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1109 10:30:52.999158   29322 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-102528 NodeName:multinode-102528 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
	I1109 10:30:52.999268   29322 kubeadm.go:161] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "multinode-102528"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1109 10:30:52.999350   29322 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-102528 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.3 ClusterName:multinode-102528 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
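Both the kubeadm config and the kubelet unit above appear to be rendered from Go text/template files keyed on the options structs echoed in the log. A toy rendering of one stanza under that assumption (the template string here is illustrative, not minikube's actual template):

package main

import (
	"os"
	"text/template"
)

// stanza is an illustrative fragment matching the kubeadm config above.
const stanza = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(stanza))
	err := t.Execute(os.Stdout, struct {
		AdvertiseAddress string
		APIServerPort    int
	}{"192.168.58.2", 8443})
	if err != nil {
		panic(err)
	}
}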
	I1109 10:30:52.999422   29322 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
	I1109 10:30:53.006296   29322 command_runner.go:130] > kubeadm
	I1109 10:30:53.006305   29322 command_runner.go:130] > kubectl
	I1109 10:30:53.006308   29322 command_runner.go:130] > kubelet
	I1109 10:30:53.007208   29322 binaries.go:44] Found k8s binaries, skipping transfer
	I1109 10:30:53.007270   29322 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1109 10:30:53.014293   29322 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (478 bytes)
	I1109 10:30:53.026823   29322 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1109 10:30:53.039878   29322 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2038 bytes)
	I1109 10:30:53.052761   29322 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1109 10:30:53.056565   29322 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 10:30:53.065978   29322 certs.go:54] Setting up /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/multinode-102528 for IP: 192.168.58.2
	I1109 10:30:53.066104   29322 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.key
	I1109 10:30:53.066172   29322 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15331-22028/.minikube/proxy-client-ca.key
	I1109 10:30:53.066273   29322 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/multinode-102528/client.key
	I1109 10:30:53.066347   29322 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/multinode-102528/apiserver.key.cee25041
	I1109 10:30:53.066409   29322 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/multinode-102528/proxy-client.key
	I1109 10:30:53.066418   29322 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/multinode-102528/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1109 10:30:53.066454   29322 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/multinode-102528/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1109 10:30:53.066482   29322 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/multinode-102528/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1109 10:30:53.066503   29322 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/multinode-102528/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1109 10:30:53.066525   29322 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1109 10:30:53.066546   29322 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1109 10:30:53.066565   29322 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1109 10:30:53.066587   29322 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1109 10:30:53.066693   29322 certs.go:388] found cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/22868.pem (1338 bytes)
	W1109 10:30:53.066738   29322 certs.go:384] ignoring /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/22868_empty.pem, impossibly tiny 0 bytes
	I1109 10:30:53.066750   29322 certs.go:388] found cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca-key.pem (1675 bytes)
	I1109 10:30:53.066785   29322 certs.go:388] found cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca.pem (1082 bytes)
	I1109 10:30:53.066820   29322 certs.go:388] found cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/cert.pem (1123 bytes)
	I1109 10:30:53.066852   29322 certs.go:388] found cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/key.pem (1675 bytes)
	I1109 10:30:53.066929   29322 certs.go:388] found cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15331-22028/.minikube/files/etc/ssl/certs/228682.pem (1708 bytes)
	I1109 10:30:53.066959   29322 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1109 10:30:53.066985   29322 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/22868.pem -> /usr/share/ca-certificates/22868.pem
	I1109 10:30:53.067007   29322 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/files/etc/ssl/certs/228682.pem -> /usr/share/ca-certificates/228682.pem
	I1109 10:30:53.067498   29322 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/multinode-102528/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1109 10:30:53.084738   29322 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/multinode-102528/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1109 10:30:53.101565   29322 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/multinode-102528/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1109 10:30:53.118587   29322 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/multinode-102528/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1109 10:30:53.135860   29322 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 10:30:53.152588   29322 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1109 10:30:53.169226   29322 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 10:30:53.185584   29322 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1109 10:30:53.202725   29322 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 10:30:53.219684   29322 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/22868.pem --> /usr/share/ca-certificates/22868.pem (1338 bytes)
	I1109 10:30:53.237422   29322 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/files/etc/ssl/certs/228682.pem --> /usr/share/ca-certificates/228682.pem (1708 bytes)
	I1109 10:30:53.253645   29322 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1109 10:30:53.265820   29322 ssh_runner.go:195] Run: openssl version
	I1109 10:30:53.270891   29322 command_runner.go:130] > OpenSSL 1.1.1f  31 Mar 2020
	I1109 10:30:53.271122   29322 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/228682.pem && ln -fs /usr/share/ca-certificates/228682.pem /etc/ssl/certs/228682.pem"
	I1109 10:30:53.279178   29322 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/228682.pem
	I1109 10:30:53.283223   29322 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Nov  9 18:08 /usr/share/ca-certificates/228682.pem
	I1109 10:30:53.283402   29322 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Nov  9 18:08 /usr/share/ca-certificates/228682.pem
	I1109 10:30:53.283450   29322 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/228682.pem
	I1109 10:30:53.288370   29322 command_runner.go:130] > 3ec20f2e
	I1109 10:30:53.288756   29322 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/228682.pem /etc/ssl/certs/3ec20f2e.0"
	I1109 10:30:53.295935   29322 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 10:30:53.303922   29322 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 10:30:53.307692   29322 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Nov  9 18:04 /usr/share/ca-certificates/minikubeCA.pem
	I1109 10:30:53.307798   29322 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Nov  9 18:04 /usr/share/ca-certificates/minikubeCA.pem
	I1109 10:30:53.307852   29322 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 10:30:53.312739   29322 command_runner.go:130] > b5213941
	I1109 10:30:53.313074   29322 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1109 10:30:53.320042   29322 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22868.pem && ln -fs /usr/share/ca-certificates/22868.pem /etc/ssl/certs/22868.pem"
	I1109 10:30:53.327852   29322 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22868.pem
	I1109 10:30:53.331477   29322 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Nov  9 18:08 /usr/share/ca-certificates/22868.pem
	I1109 10:30:53.331623   29322 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Nov  9 18:08 /usr/share/ca-certificates/22868.pem
	I1109 10:30:53.331673   29322 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22868.pem
	I1109 10:30:53.336701   29322 command_runner.go:130] > 51391683
	I1109 10:30:53.337061   29322 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22868.pem /etc/ssl/certs/51391683.0"
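The openssl x509 -hash calls above compute each certificate's subject hash, and the ln -fs calls install the cert under /etc/ssl/certs/<hash>.0, the layout OpenSSL uses to look up CA certificates by directory scan. A hypothetical Go sketch of the same pairing, shelling out to openssl:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCertByHash computes the OpenSSL subject hash for certPath and creates
// the certsDir/<hash>.0 symlink, as the commands above do. Hypothetical sketch.
func linkCertByHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join(certsDir, hash+".0")
	os.Remove(link) // refresh an existing link, mirroring "ln -fs"
	return os.Symlink(certPath, link)
}

func main() {
	fmt.Println(linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"))
}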
	I1109 10:30:53.344474   29322 kubeadm.go:396] StartCluster: {Name:multinode-102528 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:multinode-102528 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1109 10:30:53.344606   29322 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1109 10:30:53.366508   29322 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1109 10:30:53.373544   29322 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1109 10:30:53.373554   29322 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1109 10:30:53.373558   29322 command_runner.go:130] > /var/lib/minikube/etcd:
	I1109 10:30:53.373562   29322 command_runner.go:130] > member
	I1109 10:30:53.374358   29322 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I1109 10:30:53.374369   29322 kubeadm.go:627] restartCluster start
	I1109 10:30:53.374423   29322 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1109 10:30:53.381136   29322 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1109 10:30:53.381225   29322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-102528
	I1109 10:30:53.437830   29322 kubeconfig.go:135] verify returned: extract IP: "multinode-102528" does not appear in /Users/jenkins/minikube-integration/15331-22028/kubeconfig
	I1109 10:30:53.437916   29322 kubeconfig.go:146] "multinode-102528" context is missing from /Users/jenkins/minikube-integration/15331-22028/kubeconfig - will repair!
	I1109 10:30:53.438155   29322 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15331-22028/kubeconfig: {Name:mk02bb1c68cad934afd737965b2dbda8f5a4ba2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 10:30:53.438588   29322 loader.go:374] Config loaded from file:  /Users/jenkins/minikube-integration/15331-22028/kubeconfig
	I1109 10:30:53.438795   29322 kapi.go:59] client config for multinode-102528: &rest.Config{Host:"https://127.0.0.1:62610", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/multinode-102528/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/multinode-102528/client.key", CAFile:"/Users/jenkins/minikube-integration/15331-22028/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x23463c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
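The "context is missing ... will repair!" branch above fires when the profile's context is absent from the kubeconfig file. The same check can be reproduced with client-go's clientcmd loader; a minimal sketch, assuming k8s.io/client-go is available as a dependency:

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

// hasContext reports whether the named context exists in a kubeconfig file,
// the condition behind the "will repair!" log line above. Hypothetical sketch.
func hasContext(kubeconfigPath, name string) (bool, error) {
	cfg, err := clientcmd.LoadFromFile(kubeconfigPath)
	if err != nil {
		return false, err
	}
	_, ok := cfg.Contexts[name]
	return ok, nil
}

func main() {
	ok, err := hasContext("/Users/jenkins/.kube/config", "multinode-102528")
	fmt.Println(ok, err)
}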
	I1109 10:30:53.439169   29322 cert_rotation.go:137] Starting client certificate rotation controller
	I1109 10:30:53.439356   29322 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1109 10:30:53.447263   29322 api_server.go:165] Checking apiserver status ...
	I1109 10:30:53.447332   29322 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1109 10:30:53.455459   29322 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1109 10:30:53.657595   29322 api_server.go:165] Checking apiserver status ...
	I1109 10:30:53.657758   29322 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1109 10:30:53.668765   29322 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1109 10:30:53.857565   29322 api_server.go:165] Checking apiserver status ...
	I1109 10:30:53.857774   29322 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1109 10:30:53.869120   29322 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1109 10:30:54.057149   29322 api_server.go:165] Checking apiserver status ...
	I1109 10:30:54.057276   29322 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1109 10:30:54.068168   29322 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1109 10:30:54.257584   29322 api_server.go:165] Checking apiserver status ...
	I1109 10:30:54.257771   29322 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1109 10:30:54.268450   29322 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1109 10:30:54.457613   29322 api_server.go:165] Checking apiserver status ...
	I1109 10:30:54.457773   29322 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1109 10:30:54.469267   29322 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1109 10:30:54.657565   29322 api_server.go:165] Checking apiserver status ...
	I1109 10:30:54.657723   29322 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1109 10:30:54.668532   29322 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1109 10:30:54.857536   29322 api_server.go:165] Checking apiserver status ...
	I1109 10:30:54.857689   29322 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1109 10:30:54.868882   29322 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1109 10:30:55.057557   29322 api_server.go:165] Checking apiserver status ...
	I1109 10:30:55.057740   29322 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1109 10:30:55.068709   29322 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1109 10:30:55.255816   29322 api_server.go:165] Checking apiserver status ...
	I1109 10:30:55.255953   29322 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1109 10:30:55.266592   29322 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1109 10:30:55.457523   29322 api_server.go:165] Checking apiserver status ...
	I1109 10:30:55.457729   29322 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1109 10:30:55.468103   29322 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1109 10:30:55.657517   29322 api_server.go:165] Checking apiserver status ...
	I1109 10:30:55.657728   29322 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1109 10:30:55.668440   29322 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1109 10:30:55.857500   29322 api_server.go:165] Checking apiserver status ...
	I1109 10:30:55.857659   29322 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1109 10:30:55.869310   29322 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1109 10:30:56.057496   29322 api_server.go:165] Checking apiserver status ...
	I1109 10:30:56.057690   29322 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1109 10:30:56.068480   29322 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1109 10:30:56.257530   29322 api_server.go:165] Checking apiserver status ...
	I1109 10:30:56.257710   29322 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1109 10:30:56.268284   29322 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1109 10:30:56.457521   29322 api_server.go:165] Checking apiserver status ...
	I1109 10:30:56.457707   29322 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1109 10:30:56.468336   29322 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1109 10:30:56.468346   29322 api_server.go:165] Checking apiserver status ...
	I1109 10:30:56.468400   29322 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1109 10:30:56.476845   29322 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1109 10:30:56.476863   29322 kubeadm.go:602] needs reconfigure: apiserver error: timed out waiting for the condition
	I1109 10:30:56.476880   29322 kubeadm.go:1114] stopping kube-system containers ...
	I1109 10:30:56.476962   29322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1109 10:30:56.499057   29322 command_runner.go:130] > 87217a284b95
	I1109 10:30:56.499068   29322 command_runner.go:130] > f24399907a45
	I1109 10:30:56.499076   29322 command_runner.go:130] > acd607123986
	I1109 10:30:56.499080   29322 command_runner.go:130] > 246636dd97e8
	I1109 10:30:56.499084   29322 command_runner.go:130] > 744e86ae21f6
	I1109 10:30:56.499089   29322 command_runner.go:130] > a72eb1f58fc3
	I1109 10:30:56.499092   29322 command_runner.go:130] > 1e9e9464a654
	I1109 10:30:56.499097   29322 command_runner.go:130] > 706558a4ed10
	I1109 10:30:56.499100   29322 command_runner.go:130] > 28b3a05115ad
	I1109 10:30:56.499104   29322 command_runner.go:130] > 78e4ea2c8ae0
	I1109 10:30:56.499108   29322 command_runner.go:130] > 652c7e303fdd
	I1109 10:30:56.499111   29322 command_runner.go:130] > 4e785d9e3405
	I1109 10:30:56.499116   29322 command_runner.go:130] > b1b331d84fd3
	I1109 10:30:56.499119   29322 command_runner.go:130] > 8b8ad03da153
	I1109 10:30:56.499122   29322 command_runner.go:130] > f969ced4e9d4
	I1109 10:30:56.499126   29322 command_runner.go:130] > efc1daab7958
	I1109 10:30:56.499130   29322 command_runner.go:130] > a0c4641044c8
	I1109 10:30:56.499133   29322 command_runner.go:130] > 7272fd486970
	I1109 10:30:56.499136   29322 command_runner.go:130] > 08723ade2218
	I1109 10:30:56.499141   29322 command_runner.go:130] > 23a2523fd3db
	I1109 10:30:56.499144   29322 command_runner.go:130] > 52deb537c4a0
	I1109 10:30:56.499155   29322 command_runner.go:130] > bac09f656d79
	I1109 10:30:56.499159   29322 command_runner.go:130] > 23053176a325
	I1109 10:30:56.499162   29322 command_runner.go:130] > ae39c6ec78b2
	I1109 10:30:56.499165   29322 command_runner.go:130] > 451b1fa8d38e
	I1109 10:30:56.499169   29322 command_runner.go:130] > 7ae33b58e2a6
	I1109 10:30:56.499172   29322 command_runner.go:130] > c1448cffd21f
	I1109 10:30:56.499176   29322 command_runner.go:130] > 7acd1c43832d
	I1109 10:30:56.499180   29322 command_runner.go:130] > 91faabc25d49
	I1109 10:30:56.499184   29322 command_runner.go:130] > 7d98acbd674e
	I1109 10:30:56.499187   29322 command_runner.go:130] > 5d9e6129376f
	I1109 10:30:56.499191   29322 command_runner.go:130] > 9a033e5f8d9b
	I1109 10:30:56.501398   29322 docker.go:444] Stopping containers: [87217a284b95 f24399907a45 acd607123986 246636dd97e8 744e86ae21f6 a72eb1f58fc3 1e9e9464a654 706558a4ed10 28b3a05115ad 78e4ea2c8ae0 652c7e303fdd 4e785d9e3405 b1b331d84fd3 8b8ad03da153 f969ced4e9d4 efc1daab7958 a0c4641044c8 7272fd486970 08723ade2218 23a2523fd3db 52deb537c4a0 bac09f656d79 23053176a325 ae39c6ec78b2 451b1fa8d38e 7ae33b58e2a6 c1448cffd21f 7acd1c43832d 91faabc25d49 7d98acbd674e 5d9e6129376f 9a033e5f8d9b]
	I1109 10:30:56.501499   29322 ssh_runner.go:195] Run: docker stop 87217a284b95 f24399907a45 acd607123986 246636dd97e8 744e86ae21f6 a72eb1f58fc3 1e9e9464a654 706558a4ed10 28b3a05115ad 78e4ea2c8ae0 652c7e303fdd 4e785d9e3405 b1b331d84fd3 8b8ad03da153 f969ced4e9d4 efc1daab7958 a0c4641044c8 7272fd486970 08723ade2218 23a2523fd3db 52deb537c4a0 bac09f656d79 23053176a325 ae39c6ec78b2 451b1fa8d38e 7ae33b58e2a6 c1448cffd21f 7acd1c43832d 91faabc25d49 7d98acbd674e 5d9e6129376f 9a033e5f8d9b
	I1109 10:30:56.526736   29322 command_runner.go:130] > 87217a284b95
	I1109 10:30:56.526853   29322 command_runner.go:130] > f24399907a45
	I1109 10:30:56.526861   29322 command_runner.go:130] > acd607123986
	I1109 10:30:56.526865   29322 command_runner.go:130] > 246636dd97e8
	I1109 10:30:56.526875   29322 command_runner.go:130] > 744e86ae21f6
	I1109 10:30:56.526879   29322 command_runner.go:130] > a72eb1f58fc3
	I1109 10:30:56.526884   29322 command_runner.go:130] > 1e9e9464a654
	I1109 10:30:56.527234   29322 command_runner.go:130] > 706558a4ed10
	I1109 10:30:56.527240   29322 command_runner.go:130] > 28b3a05115ad
	I1109 10:30:56.527249   29322 command_runner.go:130] > 78e4ea2c8ae0
	I1109 10:30:56.527252   29322 command_runner.go:130] > 652c7e303fdd
	I1109 10:30:56.527255   29322 command_runner.go:130] > 4e785d9e3405
	I1109 10:30:56.527259   29322 command_runner.go:130] > b1b331d84fd3
	I1109 10:30:56.527646   29322 command_runner.go:130] > 8b8ad03da153
	I1109 10:30:56.527654   29322 command_runner.go:130] > f969ced4e9d4
	I1109 10:30:56.527660   29322 command_runner.go:130] > efc1daab7958
	I1109 10:30:56.527688   29322 command_runner.go:130] > a0c4641044c8
	I1109 10:30:56.527696   29322 command_runner.go:130] > 7272fd486970
	I1109 10:30:56.527700   29322 command_runner.go:130] > 08723ade2218
	I1109 10:30:56.527711   29322 command_runner.go:130] > 23a2523fd3db
	I1109 10:30:56.527718   29322 command_runner.go:130] > 52deb537c4a0
	I1109 10:30:56.527722   29322 command_runner.go:130] > bac09f656d79
	I1109 10:30:56.527731   29322 command_runner.go:130] > 23053176a325
	I1109 10:30:56.527735   29322 command_runner.go:130] > ae39c6ec78b2
	I1109 10:30:56.527738   29322 command_runner.go:130] > 451b1fa8d38e
	I1109 10:30:56.527742   29322 command_runner.go:130] > 7ae33b58e2a6
	I1109 10:30:56.527745   29322 command_runner.go:130] > c1448cffd21f
	I1109 10:30:56.527749   29322 command_runner.go:130] > 7acd1c43832d
	I1109 10:30:56.527752   29322 command_runner.go:130] > 91faabc25d49
	I1109 10:30:56.527756   29322 command_runner.go:130] > 7d98acbd674e
	I1109 10:30:56.527759   29322 command_runner.go:130] > 5d9e6129376f
	I1109 10:30:56.527763   29322 command_runner.go:130] > 9a033e5f8d9b
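Stopping the kube-system containers reduces to the docker ps name filter shown above plus a single docker stop over the collected IDs. A hypothetical sketch shelling out the same way:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// stopKubeSystemContainers lists containers whose names match the
// kube-system pattern used above and stops them. Hypothetical sketch of
// docker.go's "Stopping containers:" step.
func stopKubeSystemContainers() error {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter=name=k8s_.*_(kube-system)_", "--format={{.ID}}").Output()
	if err != nil {
		return err
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		return nil
	}
	return exec.Command("docker", append([]string{"stop"}, ids...)...).Run()
}

func main() { fmt.Println(stopKubeSystemContainers()) }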
	I1109 10:30:56.530164   29322 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1109 10:30:56.540298   29322 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1109 10:30:56.548234   29322 command_runner.go:130] > -rw------- 1 root root 5639 Nov  9 18:25 /etc/kubernetes/admin.conf
	I1109 10:30:56.548245   29322 command_runner.go:130] > -rw------- 1 root root 5656 Nov  9 18:28 /etc/kubernetes/controller-manager.conf
	I1109 10:30:56.548251   29322 command_runner.go:130] > -rw------- 1 root root 2003 Nov  9 18:25 /etc/kubernetes/kubelet.conf
	I1109 10:30:56.548258   29322 command_runner.go:130] > -rw------- 1 root root 5600 Nov  9 18:28 /etc/kubernetes/scheduler.conf
	I1109 10:30:56.548268   29322 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Nov  9 18:25 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Nov  9 18:28 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2003 Nov  9 18:25 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Nov  9 18:28 /etc/kubernetes/scheduler.conf
	
	I1109 10:30:56.548324   29322 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1109 10:30:56.555474   29322 command_runner.go:130] >     server: https://control-plane.minikube.internal:8443
	I1109 10:30:56.556287   29322 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1109 10:30:56.563281   29322 command_runner.go:130] >     server: https://control-plane.minikube.internal:8443
	I1109 10:30:56.564170   29322 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1109 10:30:56.571047   29322 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1109 10:30:56.571107   29322 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1109 10:30:56.578579   29322 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1109 10:30:56.585908   29322 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1109 10:30:56.585969   29322 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
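Each kubeconfig under /etc/kubernetes is expected to point at https://control-plane.minikube.internal:8443; when the grep above finds no such server line, the file is removed so the later "kubeadm init phase kubeconfig" regenerates it. A hypothetical sketch of that check-or-delete step:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureServerLine deletes conf if it does not reference the expected
// server endpoint, leaving regeneration to kubeadm. Hypothetical sketch
// of the grep/rm pairs above.
func ensureServerLine(conf, endpoint string) error {
	data, err := os.ReadFile(conf)
	if err != nil {
		return err
	}
	if strings.Contains(string(data), endpoint) {
		return nil
	}
	return os.Remove(conf)
}

func main() {
	err := ensureServerLine("/etc/kubernetes/scheduler.conf",
		"https://control-plane.minikube.internal:8443")
	fmt.Println(err)
}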
	I1109 10:30:56.592750   29322 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1109 10:30:56.599818   29322 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1109 10:30:56.599828   29322 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1109 10:30:56.641004   29322 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1109 10:30:56.641099   29322 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1109 10:30:56.641340   29322 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1109 10:30:56.641600   29322 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1109 10:30:56.641790   29322 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I1109 10:30:56.642178   29322 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I1109 10:30:56.642469   29322 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I1109 10:30:56.642615   29322 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I1109 10:30:56.643001   29322 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I1109 10:30:56.643190   29322 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1109 10:30:56.643371   29322 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1109 10:30:56.643514   29322 command_runner.go:130] > [certs] Using the existing "sa" key
	I1109 10:30:56.646486   29322 command_runner.go:130] ! W1109 18:30:56.643462    1200 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1109 10:30:56.646503   29322 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1109 10:30:56.688102   29322 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1109 10:30:56.958325   29322 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
	I1109 10:30:57.071158   29322 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
	I1109 10:30:57.585147   29322 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1109 10:30:57.725789   29322 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1109 10:30:57.730695   29322 command_runner.go:130] ! W1109 18:30:56.690457    1210 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1109 10:30:57.730717   29322 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.084226348s)
	I1109 10:30:57.730735   29322 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1109 10:30:57.782863   29322 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1109 10:30:57.783496   29322 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1109 10:30:57.783652   29322 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1109 10:30:57.857477   29322 command_runner.go:130] ! W1109 18:30:57.776830    1232 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1109 10:30:57.857501   29322 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1109 10:30:57.898838   29322 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1109 10:30:57.898862   29322 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1109 10:30:57.901424   29322 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1109 10:30:57.902007   29322 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1109 10:30:57.905954   29322 command_runner.go:130] ! W1109 18:30:57.902040    1266 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1109 10:30:57.905978   29322 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1109 10:30:57.963979   29322 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1109 10:30:57.969707   29322 command_runner.go:130] ! W1109 18:30:57.966647    1279 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1109 10:30:57.969737   29322 api_server.go:51] waiting for apiserver process to appear ...
	I1109 10:30:57.969848   29322 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 10:30:58.525853   29322 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 10:30:59.026493   29322 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 10:30:59.525289   29322 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 10:31:00.027290   29322 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 10:31:00.036231   29322 command_runner.go:130] > 1777
	I1109 10:31:00.037099   29322 api_server.go:71] duration metric: took 2.067416783s to wait for apiserver process to appear ...
	I1109 10:31:00.037110   29322 api_server.go:87] waiting for apiserver healthz status ...
	I1109 10:31:00.037123   29322 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:62610/healthz ...
	I1109 10:31:05.037242   29322 api_server.go:268] stopped: https://127.0.0.1:62610/healthz: Get "https://127.0.0.1:62610/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1109 10:31:05.537352   29322 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:62610/healthz ...
	I1109 10:31:07.877017   29322 api_server.go:278] https://127.0.0.1:62610/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1109 10:31:07.877032   29322 api_server.go:102] status: https://127.0.0.1:62610/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1109 10:31:08.037267   29322 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:62610/healthz ...
	I1109 10:31:08.044879   29322 api_server.go:278] https://127.0.0.1:62610/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1109 10:31:08.044899   29322 api_server.go:102] status: https://127.0.0.1:62610/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1109 10:31:08.538623   29322 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:62610/healthz ...
	I1109 10:31:08.545590   29322 api_server.go:278] https://127.0.0.1:62610/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1109 10:31:08.560546   29322 api_server.go:102] status: https://127.0.0.1:62610/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1109 10:31:09.037234   29322 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:62610/healthz ...
	I1109 10:31:09.043931   29322 api_server.go:278] https://127.0.0.1:62610/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1109 10:31:09.043950   29322 api_server.go:102] status: https://127.0.0.1:62610/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1109 10:31:09.537526   29322 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:62610/healthz ...
	I1109 10:31:09.543733   29322 api_server.go:278] https://127.0.0.1:62610/healthz returned 200:
	ok
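
The retry behavior visible above (api_server.go polling /healthz until it stops returning 403/500 and answers 200) can be reproduced with a short standalone probe. This is a minimal sketch, not minikube's actual implementation; the port 62610, the ~500ms cadence, and the self-signed certificate are taken from the log above, and the attempt limit is an arbitrary choice for the sketch.

// Sketch: poll an apiserver /healthz endpoint until it returns 200,
// roughly mirroring the retry loop in the log above.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// The forwarded apiserver port serves a self-signed certificate, so
	// verification is skipped for this purely local probe.
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	const url = "https://127.0.0.1:62610/healthz" // port from the log above
	for attempt := 0; attempt < 30; attempt++ {
		resp, err := client.Get(url)
		if err == nil {
			code := resp.StatusCode
			resp.Body.Close()
			if code == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", code)
		}
		time.Sleep(500 * time.Millisecond) // the log shows roughly 500ms between attempts
	}
	fmt.Println("apiserver never became healthy")
}
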
	I1109 10:31:09.543794   29322 round_trippers.go:463] GET https://127.0.0.1:62610/version
	I1109 10:31:09.543804   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:09.543814   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:09.543821   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:09.550517   29322 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1109 10:31:09.550529   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:09.550536   29322 round_trippers.go:580]     Audit-Id: 34c33ead-36cb-43db-afd0-3df0bf4099db
	I1109 10:31:09.550542   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:09.550546   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:09.550551   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:09.550556   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:09.550561   29322 round_trippers.go:580]     Content-Length: 263
	I1109 10:31:09.550565   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:09 GMT
	I1109 10:31:09.550584   29322 request.go:1154] Response Body: {
	  "major": "1",
	  "minor": "25",
	  "gitVersion": "v1.25.3",
	  "gitCommit": "434bfd82814af038ad94d62ebe59b133fcb50506",
	  "gitTreeState": "clean",
	  "buildDate": "2022-10-12T10:49:09Z",
	  "goVersion": "go1.19.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1109 10:31:09.550636   29322 api_server.go:140] control plane version: v1.25.3
	I1109 10:31:09.550644   29322 api_server.go:130] duration metric: took 9.513780893s to wait for apiserver health ...
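
The "control plane version" line above is derived from the /version payload a few lines earlier. A minimal sketch of decoding that payload, assuming only the JSON keys shown in the log (the struct and its name are illustrative, not minikube's types):

// Sketch: decode the /version response body shown in the log above.
package main

import (
	"encoding/json"
	"fmt"
)

type versionInfo struct {
	Major      string `json:"major"`
	Minor      string `json:"minor"`
	GitVersion string `json:"gitVersion"`
	Platform   string `json:"platform"`
}

func main() {
	// Abbreviated copy of the response body logged above.
	payload := []byte(`{"major":"1","minor":"25","gitVersion":"v1.25.3","platform":"linux/amd64"}`)
	var v versionInfo
	if err := json.Unmarshal(payload, &v); err != nil {
		panic(err)
	}
	fmt.Printf("control plane version: %s\n", v.GitVersion) // cf. "control plane version: v1.25.3"
}
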
	I1109 10:31:09.550651   29322 cni.go:95] Creating CNI manager for ""
	I1109 10:31:09.550657   29322 cni.go:156] 2 nodes found, recommending kindnet
	I1109 10:31:09.589534   29322 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1109 10:31:09.626427   29322 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1109 10:31:09.634186   29322 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1109 10:31:09.634204   29322 command_runner.go:130] >   Size: 2828728   	Blocks: 5528       IO Block: 4096   regular file
	I1109 10:31:09.634209   29322 command_runner.go:130] > Device: 8fh/143d	Inode: 2102734     Links: 1
	I1109 10:31:09.634247   29322 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1109 10:31:09.634260   29322 command_runner.go:130] > Access: 2022-05-18 18:39:21.000000000 +0000
	I1109 10:31:09.634265   29322 command_runner.go:130] > Modify: 2022-05-18 18:39:21.000000000 +0000
	I1109 10:31:09.634269   29322 command_runner.go:130] > Change: 2022-11-09 18:03:43.031940595 +0000
	I1109 10:31:09.634272   29322 command_runner.go:130] >  Birth: -
	I1109 10:31:09.634321   29322 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.3/kubectl ...
	I1109 10:31:09.634327   29322 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I1109 10:31:09.651799   29322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1109 10:31:10.341562   29322 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1109 10:31:10.343883   29322 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1109 10:31:10.345044   29322 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1109 10:31:10.353898   29322 command_runner.go:130] > daemonset.apps/kindnet configured
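
The CNI step above copies cni.yaml into the node and applies it with the pinned kubectl binary. A minimal sketch of that apply invocation, assuming the in-node paths from the log (running it verbatim requires that environment; in the real flow the command goes through ssh_runner rather than a local exec):

// Sketch: apply a CNI manifest with a pinned kubectl, as the
// ssh_runner invocation above does inside the node.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.25.3/kubectl",
		"apply",
		"--kubeconfig=/var/lib/minikube/kubeconfig",
		"-f", "/var/tmp/minikube/cni.yaml")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out)) // e.g. "daemonset.apps/kindnet configured"
	if err != nil {
		panic(err)
	}
}
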
	I1109 10:31:10.360090   29322 system_pods.go:43] waiting for kube-system pods to appear ...
	I1109 10:31:10.360161   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods
	I1109 10:31:10.360170   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:10.360177   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:10.360183   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:10.363951   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:10.363971   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:10.363986   29322 round_trippers.go:580]     Audit-Id: b33087c4-84f3-4d4c-ac3c-4b7b24f702c3
	I1109 10:31:10.363996   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:10.364005   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:10.364012   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:10.364018   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:10.364024   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:10 GMT
	I1109 10:31:10.365203   29322 request.go:1154] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"990"},"items":[{"metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"986","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 85432 chars]
	I1109 10:31:10.368224   29322 system_pods.go:59] 12 kube-system pods found
	I1109 10:31:10.368243   29322 system_pods.go:61] "coredns-565d847f94-fx6lt" [680c8c15-39e0-4143-8dfd-30727e628800] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 10:31:10.368248   29322 system_pods.go:61] "etcd-multinode-102528" [5dde8340-2916-4da6-91aa-ea6dfe24a5ad] Running
	I1109 10:31:10.368252   29322 system_pods.go:61] "kindnet-6kjz8" [b34e8f27-542c-40de-80a7-cf1226429128] Running
	I1109 10:31:10.368256   29322 system_pods.go:61] "kindnet-9td8m" [bb563027-b991-4b95-921a-ee4687934118] Running
	I1109 10:31:10.368259   29322 system_pods.go:61] "kindnet-z66sn" [03cc3962-c1e0-444a-8743-743e707bf96d] Running
	I1109 10:31:10.368264   29322 system_pods.go:61] "kube-apiserver-multinode-102528" [f48fa313-e8ec-42bc-87bc-7daede794fe2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1109 10:31:10.368270   29322 system_pods.go:61] "kube-controller-manager-multinode-102528" [3dd056ba-22b5-4b0c-aa7e-9e00d215df9a] Running
	I1109 10:31:10.368275   29322 system_pods.go:61] "kube-proxy-9wsxp" [03c6822b-9fef-4fa3-82a3-bb5082cf31b3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1109 10:31:10.368278   29322 system_pods.go:61] "kube-proxy-c4lh6" [e9055586-6022-464a-acdd-6fce3c87392b] Running
	I1109 10:31:10.368282   29322 system_pods.go:61] "kube-proxy-kh6r6" [de2bad4b-35b4-4537-a6a3-7acd77c63e69] Running
	I1109 10:31:10.368286   29322 system_pods.go:61] "kube-scheduler-multinode-102528" [26dff845-4103-4884-86e3-42c37dc577c0] Running
	I1109 10:31:10.368292   29322 system_pods.go:61] "storage-provisioner" [5c5e247e-06db-434c-af4a-91a2c2a08779] Running
	I1109 10:31:10.368296   29322 system_pods.go:74] duration metric: took 8.196308ms to wait for pod list to return data ...
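
The pod-list wait above is a plain GET of /api/v1/namespaces/kube-system/pods. A minimal sketch of the same listing with client-go, assuming a kubeconfig path placeholder (this illustrates the API call, not minikube's system_pods.go code):

// Sketch: list kube-system pods, analogous to the GET request above.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items)) // cf. "12 kube-system pods found"
	for _, p := range pods.Items {
		fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
	}
}
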
	I1109 10:31:10.368303   29322 node_conditions.go:102] verifying NodePressure condition ...
	I1109 10:31:10.368337   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes
	I1109 10:31:10.368342   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:10.368349   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:10.368355   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:10.371241   29322 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 10:31:10.371252   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:10.371258   29322 round_trippers.go:580]     Audit-Id: 9d2bee65-a8b0-4c5c-9d33-2e0f112cfe85
	I1109 10:31:10.371274   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:10.371282   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:10.371287   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:10.371295   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:10.371300   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:10 GMT
	I1109 10:31:10.371379   29322 request.go:1154] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"990"},"items":[{"metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 10902 chars]
	I1109 10:31:10.371833   29322 node_conditions.go:122] node storage ephemeral capacity is 115273188Ki
	I1109 10:31:10.371844   29322 node_conditions.go:123] node cpu capacity is 6
	I1109 10:31:10.371855   29322 node_conditions.go:122] node storage ephemeral capacity is 115273188Ki
	I1109 10:31:10.371858   29322 node_conditions.go:123] node cpu capacity is 6
	I1109 10:31:10.371876   29322 node_conditions.go:105] duration metric: took 3.56331ms to run NodePressure ...
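
The NodePressure check above reads per-node capacity out of the NodeList. A minimal sketch of handling those capacity quantities with apimachinery's resource package, using the exact values from the log (the variable names are illustrative):

// Sketch: parse the capacity figures logged above.
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	storage := resource.MustParse("115273188Ki") // node storage ephemeral capacity from the log
	cpu := resource.MustParse("6")               // node cpu capacity from the log
	fmt.Printf("ephemeral storage: %d bytes\n", storage.Value())
	fmt.Printf("cpus: %d\n", cpu.Value())
}
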
	I1109 10:31:10.371890   29322 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1109 10:31:10.478743   29322 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1109 10:31:10.516073   29322 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1109 10:31:10.519464   29322 command_runner.go:130] ! W1109 18:31:10.446328    2887 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1109 10:31:10.519485   29322 kubeadm.go:763] waiting for restarted kubelet to initialise ...
	I1109 10:31:10.519541   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I1109 10:31:10.519546   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:10.519552   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:10.519558   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:10.523051   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:10.523061   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:10.523067   29322 round_trippers.go:580]     Audit-Id: be75f9b2-da9d-4c2a-bb4d-8708055cafab
	I1109 10:31:10.523072   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:10.523080   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:10.523087   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:10.523092   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:10.523097   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:10 GMT
	I1109 10:31:10.523284   29322 request.go:1154] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"993"},"items":[{"metadata":{"name":"etcd-multinode-102528","namespace":"kube-system","uid":"5dde8340-2916-4da6-91aa-ea6dfe24a5ad","resourceVersion":"760","creationTimestamp":"2022-11-09T18:25:54Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"58165e0d3ee72e9b0f054fadec557161","kubernetes.io/config.mirror":"58165e0d3ee72e9b0f054fadec557161","kubernetes.io/config.seen":"2022-11-09T18:25:54.343403314Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kub [truncated 30634 chars]
	I1109 10:31:10.524047   29322 kubeadm.go:778] kubelet initialised
	I1109 10:31:10.524056   29322 kubeadm.go:779] duration metric: took 4.560079ms waiting for restarted kubelet to initialise ...
	I1109 10:31:10.524062   29322 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1109 10:31:10.524096   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods
	I1109 10:31:10.524101   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:10.524108   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:10.524114   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:10.528593   29322 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1109 10:31:10.528605   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:10.528611   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:10 GMT
	I1109 10:31:10.528615   29322 round_trippers.go:580]     Audit-Id: 1ddadfb1-bb80-443f-86a9-c09f461ccebb
	I1109 10:31:10.528620   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:10.528627   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:10.528631   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:10.528637   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:10.530236   29322 request.go:1154] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"993"},"items":[{"metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"986","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 85432 chars]
	I1109 10:31:10.532148   29322 pod_ready.go:78] waiting up to 4m0s for pod "coredns-565d847f94-fx6lt" in "kube-system" namespace to be "Ready" ...
	I1109 10:31:10.532191   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:10.532197   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:10.532203   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:10.532209   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:10.534114   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:10.534124   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:10.534132   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:10.534139   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:10.534144   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:10.534149   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:10 GMT
	I1109 10:31:10.534153   29322 round_trippers.go:580]     Audit-Id: 0ef76549-eef7-46b4-9eae-80919bc16550
	I1109 10:31:10.534165   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:10.534429   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"986","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6604 chars]
	I1109 10:31:10.534715   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:10.534722   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:10.534728   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:10.534733   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:10.536900   29322 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 10:31:10.536909   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:10.536914   29322 round_trippers.go:580]     Audit-Id: 56354366-fc07-4398-9895-9d000dba0270
	I1109 10:31:10.536922   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:10.536932   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:10.536937   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:10.536942   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:10.536947   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:10 GMT
	I1109 10:31:10.536997   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:11.038168   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:11.038189   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:11.038201   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:11.038211   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:11.041651   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:11.041673   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:11.041683   29322 round_trippers.go:580]     Audit-Id: d0c00871-52c3-4e1d-af98-0213bfdebca8
	I1109 10:31:11.041713   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:11.041729   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:11.041739   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:11.041749   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:11.041768   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:11 GMT
	I1109 10:31:11.042016   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"986","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6604 chars]
	I1109 10:31:11.042303   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:11.042309   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:11.042315   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:11.042321   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:11.044351   29322 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 10:31:11.044360   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:11.044366   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:11.044371   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:11 GMT
	I1109 10:31:11.044376   29322 round_trippers.go:580]     Audit-Id: 15db7ac2-66ab-4151-9b88-b8154b6e6005
	I1109 10:31:11.044381   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:11.044385   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:11.044390   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:11.044436   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:11.539063   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:11.539084   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:11.539096   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:11.539106   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:11.542945   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:11.542960   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:11.542967   29322 round_trippers.go:580]     Audit-Id: 89c67b66-1bba-40ff-bc44-ccbad45cde76
	I1109 10:31:11.542974   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:11.542981   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:11.542987   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:11.542997   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:11.543003   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:11 GMT
	I1109 10:31:11.543103   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"986","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6604 chars]
	I1109 10:31:11.543398   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:11.543404   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:11.543412   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:11.543418   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:11.545265   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:11.545274   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:11.545280   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:11.545285   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:11.545293   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:11.545297   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:11.545303   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:11 GMT
	I1109 10:31:11.545310   29322 round_trippers.go:580]     Audit-Id: 6ec58151-372f-4f6b-85f3-ed59100c8fe0
	I1109 10:31:11.545685   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:12.037449   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:12.037475   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:12.037490   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:12.037502   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:12.040764   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:12.040774   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:12.040780   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:12.040784   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:12.040791   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:12 GMT
	I1109 10:31:12.040797   29322 round_trippers.go:580]     Audit-Id: f55229c7-5505-41f3-adbb-59f88235ba56
	I1109 10:31:12.040802   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:12.040833   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:12.040998   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"986","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6604 chars]
	I1109 10:31:12.041284   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:12.041290   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:12.041296   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:12.041302   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:12.043117   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:12.043126   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:12.043132   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:12.043137   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:12.043142   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:12 GMT
	I1109 10:31:12.043163   29322 round_trippers.go:580]     Audit-Id: 506e55cb-4e1a-4ef5-a772-acfa8e24556e
	I1109 10:31:12.043176   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:12.043183   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:12.043235   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:12.537958   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:12.537981   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:12.537994   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:12.538004   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:12.541582   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:12.541597   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:12.541604   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:12.541610   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:12.541617   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:12 GMT
	I1109 10:31:12.541624   29322 round_trippers.go:580]     Audit-Id: d6be5be2-15d6-4b0a-895a-fd6796e8ab86
	I1109 10:31:12.541631   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:12.541637   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:12.541854   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"986","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6604 chars]
	I1109 10:31:12.542232   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:12.542239   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:12.542245   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:12.542250   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:12.544130   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:12.544143   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:12.544149   29322 round_trippers.go:580]     Audit-Id: d478f378-638b-411e-a2d0-bb9fc87f2236
	I1109 10:31:12.544154   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:12.544159   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:12.544166   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:12.544171   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:12.544176   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:12 GMT
	I1109 10:31:12.544221   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:12.544410   29322 pod_ready.go:102] pod "coredns-565d847f94-fx6lt" in "kube-system" namespace has status "Ready":"False"
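
The `"Ready":"False"` message above comes from checking the PodReady condition on each polled pod. A minimal sketch of that readiness test, assuming only the standard corev1 types (the helper name isPodReady is illustrative, not minikube's pod_ready.go code):

// Sketch: the readiness check behind the "Ready":"False" log line above.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// coredns-565d847f94-fx6lt above is Running but its readiness probe
	// has not yet passed, so PodReady is False.
	pod := &corev1.Pod{Status: corev1.PodStatus{Conditions: []corev1.PodCondition{
		{Type: corev1.PodReady, Status: corev1.ConditionFalse},
	}}}
	fmt.Println("ready:", isPodReady(pod)) // prints "ready: false"
}
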
	I1109 10:31:13.037700   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:13.037719   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:13.037732   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:13.037742   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:13.041492   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:13.041507   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:13.041515   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:13.041521   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:13.041528   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:13.041534   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:13.041540   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:13 GMT
	I1109 10:31:13.041548   29322 round_trippers.go:580]     Audit-Id: fd31528d-08a4-4c10-ad69-2b587887eefa
	I1109 10:31:13.041640   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"986","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6604 chars]
	I1109 10:31:13.041980   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:13.041987   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:13.041994   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:13.041999   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:13.044189   29322 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 10:31:13.044198   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:13.044204   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:13.044209   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:13.044214   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:13.044219   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:13.044223   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:13 GMT
	I1109 10:31:13.044228   29322 round_trippers.go:580]     Audit-Id: f5f133b0-b27f-4d4b-bc06-f336e87d6e47
	I1109 10:31:13.044280   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:13.537367   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:13.558119   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:13.558136   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:13.558150   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:13.561927   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:13.561941   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:13.561955   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:13 GMT
	I1109 10:31:13.561961   29322 round_trippers.go:580]     Audit-Id: ddfda5c6-9e1b-49db-aa32-fbae08a710f8
	I1109 10:31:13.561967   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:13.561974   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:13.561979   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:13.561984   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:13.562270   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"986","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6604 chars]
	I1109 10:31:13.562560   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:13.562566   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:13.562572   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:13.562578   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:13.564444   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:13.564454   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:13.564459   29322 round_trippers.go:580]     Audit-Id: 9a04d481-3ffa-4e14-9794-7a6c9d40908b
	I1109 10:31:13.564464   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:13.564469   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:13.564474   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:13.564479   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:13.564487   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:13 GMT
	I1109 10:31:13.564814   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:14.039405   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:14.039428   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:14.039442   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:14.039453   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:14.043121   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:14.043144   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:14.043154   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:14.043160   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:14.043167   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:14.043174   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:14 GMT
	I1109 10:31:14.043180   29322 round_trippers.go:580]     Audit-Id: 56795e2f-8c6e-4057-9d2e-a4779f85a832
	I1109 10:31:14.043187   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:14.043263   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"986","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6604 chars]
	I1109 10:31:14.043637   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:14.043644   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:14.043650   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:14.043656   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:14.045641   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:14.045651   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:14.045656   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:14 GMT
	I1109 10:31:14.045661   29322 round_trippers.go:580]     Audit-Id: 64e6f601-7b65-4744-86e3-fe0eb676868c
	I1109 10:31:14.045666   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:14.045671   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:14.045678   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:14.045685   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:14.045823   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:14.539341   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:14.539361   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:14.539374   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:14.539383   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:14.543132   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:14.543147   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:14.543155   29322 round_trippers.go:580]     Audit-Id: 20a34b51-ac18-43a8-8435-4d32fc87bb5f
	I1109 10:31:14.543161   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:14.543169   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:14.543175   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:14.543188   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:14.543196   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:14 GMT
	I1109 10:31:14.543283   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:14.543664   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:14.543671   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:14.543677   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:14.543682   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:14.545442   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:14.545452   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:14.545458   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:14.545463   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:14.545468   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:14 GMT
	I1109 10:31:14.545473   29322 round_trippers.go:580]     Audit-Id: c874ff18-df68-4a94-86c9-9e3af3d78370
	I1109 10:31:14.545478   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:14.545482   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:14.545534   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:14.545719   29322 pod_ready.go:102] pod "coredns-565d847f94-fx6lt" in "kube-system" namespace has status "Ready":"False"
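Each pod_ready.go:102 line above closes one iteration of the readiness wait loop: GET the pod, GET its node, observe that Ready is still False, sleep, and retry; the timestamps show a roughly 500 ms cadence. A minimal sketch of such a loop, assuming client-go; pollPodReady and the exact interval and timeout are illustrative, inferred from the log's cadence rather than taken from minikube's source.

// Minimal sketch of the poll loop implied by this log: fetch the pod,
// test its Ready condition, retry on an interval until a deadline.
// Assumes client-go; names and durations are illustrative.
package podwait

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func pollPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 5*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat fetch errors as transient; keep polling
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}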
	I1109 10:31:15.038493   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:15.038513   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:15.038531   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:15.038585   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:15.042183   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:15.042196   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:15.042203   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:15 GMT
	I1109 10:31:15.042210   29322 round_trippers.go:580]     Audit-Id: 7295c00e-9d84-46e5-87cf-1c4b94168b7c
	I1109 10:31:15.042216   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:15.042223   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:15.042230   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:15.042236   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:15.042343   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:15.042743   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:15.042751   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:15.042757   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:15.042762   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:15.047950   29322 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1109 10:31:15.047961   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:15.047968   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:15.047973   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:15.047979   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:15.047983   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:15.047988   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:15 GMT
	I1109 10:31:15.047993   29322 round_trippers.go:580]     Audit-Id: bc0730a0-0f15-472d-a7bf-bb00cba9df66
	I1109 10:31:15.048057   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:15.539343   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:15.539365   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:15.539385   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:15.539429   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:15.543198   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:15.543213   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:15.543221   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:15.543227   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:15.543245   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:15.543252   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:15.543258   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:15 GMT
	I1109 10:31:15.543264   29322 round_trippers.go:580]     Audit-Id: 58fe1134-cc6c-451b-b309-1901404af2da
	I1109 10:31:15.543657   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:15.544517   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:15.544528   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:15.544538   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:15.544546   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:15.546842   29322 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 10:31:15.546852   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:15.546858   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:15.546863   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:15 GMT
	I1109 10:31:15.546868   29322 round_trippers.go:580]     Audit-Id: 79bdfd40-aee9-4094-b929-9cf84efb694f
	I1109 10:31:15.546873   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:15.546877   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:15.546883   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:15.547071   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:16.039322   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:16.039345   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:16.039357   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:16.039368   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:16.043120   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:16.043137   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:16.043145   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:16 GMT
	I1109 10:31:16.043152   29322 round_trippers.go:580]     Audit-Id: 7d02b36d-3780-43e4-879a-917456fe14b9
	I1109 10:31:16.043161   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:16.043167   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:16.043173   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:16.043180   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:16.043269   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:16.043696   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:16.043703   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:16.043710   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:16.043715   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:16.045563   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:16.045573   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:16.045579   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:16.045585   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:16.045591   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:16.045596   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:16.045600   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:16 GMT
	I1109 10:31:16.045605   29322 round_trippers.go:580]     Audit-Id: 67719710-afc1-46e7-ac0c-2b0223786666
	I1109 10:31:16.045801   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:16.537395   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:16.537418   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:16.537431   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:16.537441   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:16.541163   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:16.541186   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:16.541198   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:16.541208   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:16.541217   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:16.541223   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:16 GMT
	I1109 10:31:16.541229   29322 round_trippers.go:580]     Audit-Id: e8f4d533-8ba4-4194-bde5-3e0766997228
	I1109 10:31:16.541237   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:16.541345   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:16.541662   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:16.541670   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:16.541676   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:16.541688   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:16.543532   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:16.543546   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:16.543574   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:16.543582   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:16.543587   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:16.543592   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:16 GMT
	I1109 10:31:16.543600   29322 round_trippers.go:580]     Audit-Id: 74678dc9-0b76-488a-b4bf-6b60b3991d71
	I1109 10:31:16.543606   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:16.543808   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:17.037367   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:17.037393   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:17.037405   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:17.037419   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:17.041448   29322 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1109 10:31:17.041460   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:17.041465   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:17.041473   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:17.041478   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:17.041483   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:17.041487   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:17 GMT
	I1109 10:31:17.041493   29322 round_trippers.go:580]     Audit-Id: 89ff4584-216d-47c9-b463-b8ca8e440134
	I1109 10:31:17.041547   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:17.041833   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:17.041839   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:17.041846   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:17.041851   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:17.043870   29322 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 10:31:17.043880   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:17.043886   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:17.043891   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:17.043896   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:17 GMT
	I1109 10:31:17.043901   29322 round_trippers.go:580]     Audit-Id: 9b7d2f6a-c665-4e94-b99a-f26d28212e61
	I1109 10:31:17.043906   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:17.043911   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:17.043956   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:17.044133   29322 pod_ready.go:102] pod "coredns-565d847f94-fx6lt" in "kube-system" namespace has status "Ready":"False"
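The round_trippers.go prefixes come from client-go's debug transport, which at high verbosity wraps the HTTP client and logs each request's method, URL, and headers along with the response status and headers; the X-Kubernetes-Pf-* headers identify the API Priority and Fairness FlowSchema and PriorityLevel that admitted the request. The header order varies between iterations because Go map iteration order is randomized. Below is a sketch of the same wrapping technique using only the standard library; loggingTransport is a hypothetical name, and client-go's real implementation lives in k8s.io/client-go/transport.

// Sketch of the header-logging technique behind the round_trippers.go lines:
// wrap an http.RoundTripper and print the request method, URL, and headers,
// then the response status and headers.
package podwait

import (
	"log"
	"net/http"
)

type loggingTransport struct {
	next http.RoundTripper
}

func (t loggingTransport) RoundTrip(req *http.Request) (*http.Response, error) {
	log.Printf("%s %s", req.Method, req.URL)
	for k, vs := range req.Header { // map iteration: order is randomized, as in the log
		for _, v := range vs {
			log.Printf("    %s: %s", k, v)
		}
	}
	resp, err := t.next.RoundTrip(req)
	if err != nil {
		return nil, err
	}
	log.Printf("Response Status: %s", resp.Status)
	for k, vs := range resp.Header {
		for _, v := range vs {
			log.Printf("    %s: %s", k, v)
		}
	}
	return resp, nil
}

Installing it is one line: client := &http.Client{Transport: loggingTransport{next: http.DefaultTransport}}.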
	I1109 10:31:17.537355   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:17.537379   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:17.537392   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:17.537403   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:17.540795   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:17.540811   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:17.540820   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:17.540826   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:17.540833   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:17.540840   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:17 GMT
	I1109 10:31:17.540846   29322 round_trippers.go:580]     Audit-Id: 19770046-1610-4c36-9ed3-0639b14fa8ef
	I1109 10:31:17.540852   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:17.540949   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:17.541277   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:17.541283   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:17.541289   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:17.541294   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:17.543125   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:17.543136   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:17.543141   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:17 GMT
	I1109 10:31:17.543147   29322 round_trippers.go:580]     Audit-Id: 6dfd279f-2895-423b-a34d-1ea7aa6d494a
	I1109 10:31:17.543152   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:17.543157   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:17.543162   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:17.543166   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:17.543215   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:18.037606   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:18.037630   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:18.037643   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:18.037653   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:18.040964   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:18.040980   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:18.040987   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:18.040994   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:18.041000   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:18.041006   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:18.041016   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:18 GMT
	I1109 10:31:18.041023   29322 round_trippers.go:580]     Audit-Id: 5a254ee7-624e-4812-bef7-b25d95372942
	I1109 10:31:18.041408   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:18.041761   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:18.041769   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:18.041777   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:18.041782   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:18.043528   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:18.043538   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:18.043544   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:18.043552   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:18.043557   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:18 GMT
	I1109 10:31:18.043562   29322 round_trippers.go:580]     Audit-Id: c09d6bf8-3033-4b51-87eb-a4cea66ca9c0
	I1109 10:31:18.043569   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:18.043574   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:18.043619   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:18.537272   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:18.558965   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:18.558988   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:18.558999   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:18.562923   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:18.562937   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:18.562945   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:18.562952   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:18.562959   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:18.562966   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:18 GMT
	I1109 10:31:18.562973   29322 round_trippers.go:580]     Audit-Id: c73ea383-9db8-4997-aa83-0d7e7defc95f
	I1109 10:31:18.562979   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:18.563054   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:18.563446   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:18.563452   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:18.563458   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:18.563464   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:18.565559   29322 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 10:31:18.565569   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:18.565574   29322 round_trippers.go:580]     Audit-Id: 1f7a7220-015b-48cd-85af-0338196439ee
	I1109 10:31:18.565581   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:18.565586   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:18.565591   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:18.565596   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:18.565600   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:18 GMT
	I1109 10:31:18.565806   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:19.037704   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:19.037727   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:19.037740   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:19.037750   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:19.041220   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:19.041236   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:19.041243   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:19 GMT
	I1109 10:31:19.041250   29322 round_trippers.go:580]     Audit-Id: 7f0a2e74-69af-4abb-a8cd-b7026f5f3146
	I1109 10:31:19.041256   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:19.041263   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:19.041269   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:19.041275   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:19.041692   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:19.042078   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:19.042085   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:19.042091   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:19.042096   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:19.043850   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:19.043859   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:19.043865   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:19.043870   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:19 GMT
	I1109 10:31:19.043875   29322 round_trippers.go:580]     Audit-Id: d80df721-9f64-494d-b981-1d67059701c9
	I1109 10:31:19.043879   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:19.043884   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:19.043891   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:19.043943   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:19.044123   29322 pod_ready.go:102] pod "coredns-565d847f94-fx6lt" in "kube-system" namespace has status "Ready":"False"
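The "Ready":"False" verdicts are read from status.conditions in the Pod bodies above (the conditions themselves fall inside the truncated portion of each body). A minimal sketch of extracting that condition from a raw response body, assuming a trimmed struct that mirrors the Kubernetes Pod schema rather than the real client-go types:

// Minimal sketch: pull the Ready condition out of a raw Pod JSON body like
// the (truncated) ones in this log. podDoc/podCondition are trimmed
// illustrations of the Pod schema, not client-go types.
package podwait

import "encoding/json"

type podCondition struct {
	Type   string `json:"type"`
	Status string `json:"status"`
}

type podDoc struct {
	Status struct {
		Conditions []podCondition `json:"conditions"`
	} `json:"status"`
}

// readyStatus returns the Ready condition's value ("True", "False", or
// "Unknown"), or "" if the condition is absent or the JSON is invalid.
func readyStatus(body []byte) string {
	var p podDoc
	if err := json.Unmarshal(body, &p); err != nil {
		return ""
	}
	for _, c := range p.Status.Conditions {
		if c.Type == "Ready" {
			return c.Status
		}
	}
	return ""
}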
	I1109 10:31:19.537264   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:19.537285   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:19.537298   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:19.537308   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:19.541005   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:19.541025   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:19.541036   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:19 GMT
	I1109 10:31:19.541045   29322 round_trippers.go:580]     Audit-Id: 55b23125-a937-4e13-a1cf-57d66fcbe7a6
	I1109 10:31:19.541054   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:19.541064   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:19.541073   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:19.541081   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:19.541485   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:19.541779   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:19.541786   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:19.541792   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:19.541797   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:19.543600   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:19.543613   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:19.543619   29322 round_trippers.go:580]     Audit-Id: f7156a6a-5ddc-4f69-ad81-f5a48d5522c0
	I1109 10:31:19.543624   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:19.543629   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:19.543634   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:19.543641   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:19.543646   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:19 GMT
	I1109 10:31:19.543993   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:20.037954   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:20.037976   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:20.037989   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:20.038001   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:20.041834   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:20.041848   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:20.041855   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:20.041862   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:20.041869   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:20.041875   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:20 GMT
	I1109 10:31:20.041881   29322 round_trippers.go:580]     Audit-Id: b241b992-06b9-4c52-8a1f-e90c3e9cf7a5
	I1109 10:31:20.041888   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:20.041952   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:20.042322   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:20.042331   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:20.042339   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:20.042346   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:20.044447   29322 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 10:31:20.044456   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:20.044462   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:20.044467   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:20 GMT
	I1109 10:31:20.044472   29322 round_trippers.go:580]     Audit-Id: 3e271430-ef86-43eb-8807-6dbac4fefa63
	I1109 10:31:20.044478   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:20.044482   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:20.044487   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:20.044531   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:20.538760   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:20.538783   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:20.538795   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:20.538805   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:20.542449   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:20.542464   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:20.542473   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:20.542479   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:20.542485   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:20.542493   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:20.542499   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:20 GMT
	I1109 10:31:20.542506   29322 round_trippers.go:580]     Audit-Id: 8574b39c-cb81-4884-b855-9feaebab2bdd
	I1109 10:31:20.542588   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:20.542966   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:20.542972   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:20.542978   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:20.542983   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:20.544704   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:20.544715   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:20.544722   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:20.544727   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:20 GMT
	I1109 10:31:20.544732   29322 round_trippers.go:580]     Audit-Id: 228808bc-e64a-4c81-8249-c4b9633c61f3
	I1109 10:31:20.544755   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:20.544764   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:20.544770   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:20.544838   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:21.037607   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:21.037653   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:21.037679   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:21.037686   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:21.040590   29322 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 10:31:21.040602   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:21.040608   29322 round_trippers.go:580]     Audit-Id: 13f24be9-4b98-45a9-be86-288680789fe0
	I1109 10:31:21.040613   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:21.040618   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:21.040623   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:21.040628   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:21.040633   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:21 GMT
	I1109 10:31:21.040716   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:21.041009   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:21.041016   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:21.041022   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:21.041027   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:21.044014   29322 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 10:31:21.044023   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:21.044031   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:21 GMT
	I1109 10:31:21.044038   29322 round_trippers.go:580]     Audit-Id: f3a0ab83-a8cb-4016-8b24-2fd9ff068801
	I1109 10:31:21.044044   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:21.044049   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:21.044053   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:21.044058   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:21.044384   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:21.044571   29322 pod_ready.go:102] pod "coredns-565d847f94-fx6lt" in "kube-system" namespace has status "Ready":"False"
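The pod_ready.go:102 line above closes one cycle of the wait loop this stretch of log traces: roughly every 500 ms the client GETs the CoreDNS pod, then its node, and re-checks the pod's Ready condition. The sketch below is a minimal client-go illustration of that kind of loop, under stated assumptions (kubeconfig at the default path, pod and namespace names copied from the log, 500 ms cadence read off the timestamps); it is not minikube's actual pod_ready.go.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a kubeconfig at the default location (~/.kube/config).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Names taken from the log above.
	const ns, name = "kube-system", "coredns-565d847f94-fx6lt"

	// Poll every 500ms (the cadence visible in the timestamps above)
	// until the pod's Ready condition turns True or the timeout expires.
	err = wait.PollImmediate(500*time.Millisecond, 5*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				// Mirrors the shape of the pod_ready.go:102 summary line.
				fmt.Printf("pod %q in %q namespace has status \"Ready\":%q\n", name, ns, c.Status)
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil // no Ready condition reported yet; keep polling
	})
	if err != nil {
		panic(err)
	}
}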
	I1109 10:31:21.539201   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:21.539225   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:21.539239   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:21.539249   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:21.543261   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:21.543278   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:21.543286   29322 round_trippers.go:580]     Audit-Id: 79308c26-ad97-43d2-aa7f-dd56caf2a8ee
	I1109 10:31:21.543293   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:21.543299   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:21.543306   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:21.543313   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:21.543321   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:21 GMT
	I1109 10:31:21.543405   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:21.543782   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:21.543792   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:21.543801   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:21.543807   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:21.545567   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:21.545576   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:21.545581   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:21 GMT
	I1109 10:31:21.545586   29322 round_trippers.go:580]     Audit-Id: b47eb9a8-241d-40f3-be42-f3ec8653709c
	I1109 10:31:21.545591   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:21.545596   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:21.545601   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:21.545605   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:21.545644   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:22.039058   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:22.039080   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:22.039092   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:22.039102   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:22.042639   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:22.042652   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:22.042660   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:22.042667   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:22.042674   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:22.042682   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:22 GMT
	I1109 10:31:22.042688   29322 round_trippers.go:580]     Audit-Id: 31049b80-c4f3-458f-9961-645c61c01f13
	I1109 10:31:22.042695   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:22.042780   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:22.043152   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:22.043162   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:22.043170   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:22.043192   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:22.044928   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:22.044938   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:22.044943   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:22 GMT
	I1109 10:31:22.044948   29322 round_trippers.go:580]     Audit-Id: ab1133ae-333b-46ae-9710-2ca3ac93a902
	I1109 10:31:22.044954   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:22.044958   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:22.044963   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:22.044968   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:22.045265   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:22.537634   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:22.537655   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:22.537667   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:22.537678   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:22.541493   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:22.541507   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:22.541515   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:22.541521   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:22.541527   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:22 GMT
	I1109 10:31:22.541533   29322 round_trippers.go:580]     Audit-Id: f59afa58-62fa-4b4f-ad29-56d2ac96eca5
	I1109 10:31:22.541540   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:22.541547   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:22.541754   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:22.542073   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:22.542081   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:22.542087   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:22.542093   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:22.544210   29322 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 10:31:22.544220   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:22.544226   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:22.544231   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:22.544236   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:22.544241   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:22 GMT
	I1109 10:31:22.544246   29322 round_trippers.go:580]     Audit-Id: fa3181b3-9b27-495c-9721-0a40ec1987fc
	I1109 10:31:22.544251   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:22.544388   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:23.039102   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:23.039130   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:23.039142   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:23.039152   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:23.043001   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:23.043016   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:23.043023   29322 round_trippers.go:580]     Audit-Id: 4242578c-4fc4-42fb-b273-9b98f23a40e9
	I1109 10:31:23.043030   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:23.043065   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:23.043071   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:23.043079   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:23.043085   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:23 GMT
	I1109 10:31:23.043369   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:23.043727   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:23.043734   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:23.043740   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:23.043747   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:23.045641   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:23.045651   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:23.045656   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:23.045661   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:23 GMT
	I1109 10:31:23.045666   29322 round_trippers.go:580]     Audit-Id: 1a934e76-2e59-4da4-8c0f-3821188f3d46
	I1109 10:31:23.045670   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:23.045675   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:23.045679   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:23.045911   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:23.046093   29322 pod_ready.go:102] pod "coredns-565d847f94-fx6lt" in "kube-system" namespace has status "Ready":"False"
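Each poll above pairs the pod GET with a GET of /api/v1/nodes/multinode-102528: the waiter only counts the pod as usable if the node it is scheduled on is healthy as well. A hypothetical helper showing that companion node check follows (again assuming client-go; nodeReady is an illustrative name, not a minikube function):

package readiness

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// nodeReady reports whether the named node currently has a NodeReady
// condition with status True. Hypothetical helper illustrating the paired
// node GETs in the log; not minikube code.
func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil // no NodeReady condition reported yet
}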
	I1109 10:31:23.538389   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:23.560007   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:23.560028   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:23.560045   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:23.563769   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:23.563785   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:23.563793   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:23.563800   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:23 GMT
	I1109 10:31:23.563806   29322 round_trippers.go:580]     Audit-Id: 15c536c0-872d-4602-9033-5da1ef6085fb
	I1109 10:31:23.563813   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:23.563821   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:23.563828   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:23.563900   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:23.564276   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:23.564286   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:23.564294   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:23.564301   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:23.566173   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:23.566185   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:23.566194   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:23.566200   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:23.566206   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:23.566214   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:23 GMT
	I1109 10:31:23.566219   29322 round_trippers.go:580]     Audit-Id: e94ab873-969d-4056-b114-cfddb0f5bc30
	I1109 10:31:23.566228   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:23.566273   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:24.037923   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:24.037949   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:24.037962   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:24.037971   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:24.041889   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:24.041907   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:24.041915   29322 round_trippers.go:580]     Audit-Id: c00c5a61-38e8-4636-afcb-df4038460ae8
	I1109 10:31:24.041922   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:24.041928   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:24.041935   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:24.041941   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:24.041954   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:24 GMT
	I1109 10:31:24.042026   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:24.042405   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:24.042414   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:24.042425   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:24.042433   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:24.044468   29322 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 10:31:24.044477   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:24.044482   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:24.044487   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:24.044492   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:24 GMT
	I1109 10:31:24.044497   29322 round_trippers.go:580]     Audit-Id: 6075aab7-d913-44f5-9943-f56225c923f3
	I1109 10:31:24.044501   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:24.044506   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:24.044547   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:24.537410   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:24.537439   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:24.537452   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:24.537462   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:24.541470   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:24.541490   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:24.541500   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:24.541513   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:24 GMT
	I1109 10:31:24.541524   29322 round_trippers.go:580]     Audit-Id: 8b58d677-8064-4da2-98d3-aec952625b5b
	I1109 10:31:24.541531   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:24.541538   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:24.541546   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:24.541630   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:24.541946   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:24.541952   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:24.541958   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:24.541964   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:24.544390   29322 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 10:31:24.544400   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:24.544405   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:24.544410   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:24.544415   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:24.544420   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:24 GMT
	I1109 10:31:24.544425   29322 round_trippers.go:580]     Audit-Id: 85cbe0af-09cb-4389-8b76-4e0227a4a5b3
	I1109 10:31:24.544430   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:24.544557   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:25.039114   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:25.039137   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:25.039150   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:25.039160   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:25.043286   29322 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1109 10:31:25.043303   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:25.043311   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:25 GMT
	I1109 10:31:25.043317   29322 round_trippers.go:580]     Audit-Id: 4ce982a8-6352-4b65-8810-34ef2d6cbe0e
	I1109 10:31:25.043323   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:25.043344   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:25.043362   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:25.043370   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:25.043590   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:25.043938   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:25.043944   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:25.043951   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:25.043957   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:25.045465   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:25.045479   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:25.045488   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:25 GMT
	I1109 10:31:25.045497   29322 round_trippers.go:580]     Audit-Id: adb18846-dc4d-4514-97e5-f8fd47f6cdaf
	I1109 10:31:25.045506   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:25.045514   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:25.045523   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:25.045530   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:25.045591   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:25.539042   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:25.539070   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:25.539082   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:25.539093   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:25.542959   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:25.542974   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:25.542982   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:25.542989   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:25 GMT
	I1109 10:31:25.542997   29322 round_trippers.go:580]     Audit-Id: 33b75cb4-ed56-4795-ab13-5dee43b9ab8f
	I1109 10:31:25.543004   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:25.543011   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:25.543017   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:25.543083   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:25.543477   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:25.543487   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:25.543495   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:25.543503   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:25.545588   29322 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 10:31:25.545620   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:25.545629   29322 round_trippers.go:580]     Audit-Id: 6b0419be-bf94-4ca5-a6b1-c8b64071ab3b
	I1109 10:31:25.545640   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:25.545648   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:25.545655   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:25.545663   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:25.545669   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:25 GMT
	I1109 10:31:25.545868   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:25.546046   29322 pod_ready.go:102] pod "coredns-565d847f94-fx6lt" in "kube-system" namespace has status "Ready":"False"
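Note that the pod's resourceVersion stays pinned at 1020 through every response body in this stretch, so each GET returns an identical object and the repeated bodies carry no new information. A watch opened from that resourceVersion would instead deliver only the transition to Ready. The sketch below shows that alternative under the same client-go assumption (the field selector and version string are copied from the log); it is an illustration of the trade-off, not what minikube does here:

package readiness

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// watchUntilReady blocks until the named pod reports Ready=True, consuming
// watch events instead of re-GETting an unchanged object twice a second.
func watchUntilReady(cs *kubernetes.Clientset, ns, name, fromVersion string) error {
	w, err := cs.CoreV1().Pods(ns).Watch(context.TODO(), metav1.ListOptions{
		FieldSelector:   "metadata.name=" + name, // server-side filter to this one pod
		ResourceVersion: fromVersion,             // e.g. "1020", the version the polls kept seeing
	})
	if err != nil {
		return err
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		pod, ok := ev.Object.(*corev1.Pod)
		if !ok {
			continue // e.g. a *metav1.Status error event
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				fmt.Printf("pod %q became Ready\n", name)
				return nil
			}
		}
	}
	return fmt.Errorf("watch for pod %q closed before it became Ready", name)
}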
	I1109 10:31:26.037505   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:26.037530   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:26.037543   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:26.037599   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:26.041250   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:26.041265   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:26.041273   29322 round_trippers.go:580]     Audit-Id: f57f4c25-1d8b-44e4-89e3-a7c84bb46e56
	I1109 10:31:26.041279   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:26.041286   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:26.041292   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:26.041299   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:26.041305   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:26 GMT
	I1109 10:31:26.041387   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:26.041685   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:26.041692   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:26.041698   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:26.041705   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:26.043575   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:26.043585   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:26.043591   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:26.043597   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:26.043602   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:26.043607   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:26 GMT
	I1109 10:31:26.043612   29322 round_trippers.go:580]     Audit-Id: 091ace2a-9c49-49c1-85e1-de080e5527c7
	I1109 10:31:26.043617   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:26.043651   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:26.538386   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:26.538408   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:26.538421   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:26.538430   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:26.541663   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:26.541677   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:26.541685   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:26.541696   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:26.541706   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:26 GMT
	I1109 10:31:26.541716   29322 round_trippers.go:580]     Audit-Id: f319c7f2-dcf6-4272-8565-dd9e964d3a8c
	I1109 10:31:26.541726   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:26.541733   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:26.542223   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:26.542574   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:26.542580   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:26.542586   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:26.542592   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:26.544227   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:26.544237   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:26.544244   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:26.544269   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:26 GMT
	I1109 10:31:26.544277   29322 round_trippers.go:580]     Audit-Id: df86717c-8944-41d3-9c15-7a5e5550c00c
	I1109 10:31:26.544281   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:26.544286   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:26.544292   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:26.544710   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:27.038661   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:27.038685   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:27.038697   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:27.038708   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:27.042372   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:27.042388   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:27.042396   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:27.042402   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:27.042424   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:27.042434   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:27 GMT
	I1109 10:31:27.042441   29322 round_trippers.go:580]     Audit-Id: d9f13960-c505-41d2-aba8-5d758fcc53b9
	I1109 10:31:27.042449   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:27.042512   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:27.042877   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:27.042883   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:27.042889   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:27.042895   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:27.044911   29322 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 10:31:27.044921   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:27.044927   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:27.044932   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:27.044938   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:27.044943   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:27 GMT
	I1109 10:31:27.044948   29322 round_trippers.go:580]     Audit-Id: d0dc42a4-3ea4-4e97-a00f-9553dcf0ce0f
	I1109 10:31:27.044953   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:27.044992   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:27.539051   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:27.539113   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:27.539126   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:27.539139   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:27.543073   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:27.543090   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:27.543098   29322 round_trippers.go:580]     Audit-Id: 770c813b-9187-40da-bee6-fb7d8c4f2f97
	I1109 10:31:27.543104   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:27.543111   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:27.543119   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:27.543133   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:27.543145   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:27 GMT
	I1109 10:31:27.543212   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:27.543604   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:27.543614   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:27.543624   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:27.543631   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:27.545542   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:27.545551   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:27.545557   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:27.545562   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:27.545567   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:27.545572   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:27 GMT
	I1109 10:31:27.545576   29322 round_trippers.go:580]     Audit-Id: 3e759e8e-450a-4c2e-9ade-1210d49d4510
	I1109 10:31:27.545581   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:27.545615   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:28.037178   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:28.037202   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:28.037215   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:28.037225   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:28.040985   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:28.041000   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:28.041007   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:28.041014   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:28 GMT
	I1109 10:31:28.041021   29322 round_trippers.go:580]     Audit-Id: e28a1813-98d6-4e83-af9e-6d6463e72a3f
	I1109 10:31:28.041046   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:28.041057   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:28.041063   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:28.041133   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:28.041467   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:28.041474   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:28.041479   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:28.041485   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:28.043313   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:28.043323   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:28.043330   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:28.043339   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:28.043345   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:28.043351   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:28 GMT
	I1109 10:31:28.043358   29322 round_trippers.go:580]     Audit-Id: 45436f58-4394-40f1-8107-28f8e578100d
	I1109 10:31:28.043371   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:28.043524   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:28.043704   29322 pod_ready.go:102] pod "coredns-565d847f94-fx6lt" in "kube-system" namespace has status "Ready":"False"
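At the wire level, each of these entries is a plain HTTPS GET against the apiserver forwarded to 127.0.0.1:62610, sending the Accept and User-Agent headers shown and receiving back an Audit-Id plus the API Priority and Fairness headers (X-Kubernetes-Pf-Flowschema-Uid, X-Kubernetes-Pf-Prioritylevel-Uid). A bare net/http sketch of one such request, with TLS verification and authentication deliberately stubbed out (minikube really authenticates with the profile's client certificate):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    )

    func main() {
    	// InsecureSkipVerify is only acceptable for this local sketch; a
    	// real client presents the cluster's client certificate instead.
    	client := &http.Client{Transport: &http.Transport{
    		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    	}}

    	req, err := http.NewRequest("GET",
    		"https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt", nil)
    	if err != nil {
    		panic(err)
    	}
    	req.Header.Set("Accept", "application/json, */*")
    	req.Header.Set("User-Agent", "minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format")

    	resp, err := client.Do(req)
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()

    	fmt.Println("Response Status:", resp.Status)
    	// The apiserver tags every response for auditing and flow control.
    	for _, h := range []string{"Audit-Id", "X-Kubernetes-Pf-Flowschema-Uid", "X-Kubernetes-Pf-Prioritylevel-Uid"} {
    		fmt.Printf("%s: %s\n", h, resp.Header.Get(h))
    	}
    }

Without credentials this request would be rejected, but the header round trip matches what round_trippers.go logs in the trace that continues below.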
	I1109 10:31:28.538150   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:28.559908   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:28.559925   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:28.559936   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:28.563849   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:28.563864   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:28.563871   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:28.563878   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:28.563885   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:28.563891   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:28 GMT
	I1109 10:31:28.563898   29322 round_trippers.go:580]     Audit-Id: d3bb2d7b-77a8-4827-8520-2ad9a9fca3db
	I1109 10:31:28.563905   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:28.563971   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:28.564351   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:28.564357   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:28.564363   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:28.564368   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:28.566281   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:28.566291   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:28.566296   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:28.566301   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:28.566307   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:28.566311   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:28 GMT
	I1109 10:31:28.566316   29322 round_trippers.go:580]     Audit-Id: aa4af8ea-f700-4156-9523-7de556a536a9
	I1109 10:31:28.566321   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:28.566353   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:29.037157   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:29.037181   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:29.037193   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:29.037204   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:29.040645   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:29.040662   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:29.040671   29322 round_trippers.go:580]     Audit-Id: f0c9a0b5-6349-4170-922c-9b479caaf39e
	I1109 10:31:29.040677   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:29.040684   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:29.040691   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:29.040697   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:29.040704   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:29 GMT
	I1109 10:31:29.040770   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:29.041147   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:29.041155   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:29.041164   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:29.041171   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:29.043066   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:29.043076   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:29.043081   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:29.043089   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:29.043094   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:29.043101   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:29.043105   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:29 GMT
	I1109 10:31:29.043110   29322 round_trippers.go:580]     Audit-Id: 66be2469-7220-47ec-9365-83ac84d8ae1a
	I1109 10:31:29.043166   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:29.537066   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:29.537090   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:29.537104   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:29.537119   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:29.540288   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:29.540300   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:29.540306   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:29.540310   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:29.540315   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:29 GMT
	I1109 10:31:29.540322   29322 round_trippers.go:580]     Audit-Id: b6c56e7e-c138-448a-a6d0-ae34d1a0568c
	I1109 10:31:29.540328   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:29.540334   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:29.540449   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:29.540746   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:29.540753   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:29.540759   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:29.540764   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:29.542869   29322 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 10:31:29.542880   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:29.542886   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:29.542890   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:29 GMT
	I1109 10:31:29.542895   29322 round_trippers.go:580]     Audit-Id: 5a2748d7-f4ab-4b4b-8b95-e1070688bcdb
	I1109 10:31:29.542900   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:29.542904   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:29.542910   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:29.542962   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:30.038844   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:30.038872   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:30.038885   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:30.038894   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:30.042862   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:30.042880   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:30.042887   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:30.042893   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:30.042899   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:30.042906   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:30 GMT
	I1109 10:31:30.042913   29322 round_trippers.go:580]     Audit-Id: 9976e71e-fd62-4295-bce4-adb96423044b
	I1109 10:31:30.042919   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:30.043010   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:30.043387   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:30.043393   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:30.043399   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:30.043405   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:30.044981   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:30.044990   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:30.044999   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:30 GMT
	I1109 10:31:30.045005   29322 round_trippers.go:580]     Audit-Id: 98510584-9289-4dc8-9e8f-919a19b715c9
	I1109 10:31:30.045010   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:30.045014   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:30.045019   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:30.045024   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:30.045066   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:30.045238   29322 pod_ready.go:102] pod "coredns-565d847f94-fx6lt" in "kube-system" namespace has status "Ready":"False"
	I1109 10:31:30.537304   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:30.537326   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:30.537339   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:30.537349   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:30.541359   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:30.541370   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:30.541375   29322 round_trippers.go:580]     Audit-Id: 43e58940-c157-4c07-9a91-3546ce4517be
	I1109 10:31:30.541380   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:30.541384   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:30.541389   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:30.541394   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:30.541398   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:30 GMT
	I1109 10:31:30.541447   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:30.541733   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:30.541740   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:30.541745   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:30.541751   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:30.543712   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:30.543722   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:30.543727   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:30 GMT
	I1109 10:31:30.543732   29322 round_trippers.go:580]     Audit-Id: ad0c876b-9e09-4b53-8e55-2ada1d4ef210
	I1109 10:31:30.543738   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:30.543742   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:30.543749   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:30.543756   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:30.543895   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:31.037796   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:31.037818   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:31.037831   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:31.037841   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:31.041171   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:31.041186   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:31.041194   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:31.041200   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:31.041206   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:31.041214   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:31 GMT
	I1109 10:31:31.041223   29322 round_trippers.go:580]     Audit-Id: cf62655f-f1cd-47b2-ad2f-6276ad98caad
	I1109 10:31:31.041229   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:31.041287   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:31.041628   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:31.041635   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:31.041641   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:31.041661   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:31.043599   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:31.043609   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:31.043614   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:31.043619   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:31 GMT
	I1109 10:31:31.043624   29322 round_trippers.go:580]     Audit-Id: 9a1c16ba-10af-4889-a6d7-52b4fbd870f6
	I1109 10:31:31.043628   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:31.043633   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:31.043638   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:31.043671   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:31.537031   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:31.537058   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:31.537071   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:31.537081   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:31.540881   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:31.540896   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:31.540904   29322 round_trippers.go:580]     Audit-Id: 0bb11d33-096e-421b-85a3-11a741bf646d
	I1109 10:31:31.540911   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:31.540918   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:31.540925   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:31.540931   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:31.540937   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:31 GMT
	I1109 10:31:31.541006   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:31.541286   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:31.541293   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:31.541299   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:31.541304   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:31.543027   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:31.543035   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:31.543040   29322 round_trippers.go:580]     Audit-Id: 3b1e896f-8900-47b3-8366-e1b34cbd4d42
	I1109 10:31:31.543045   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:31.543051   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:31.543055   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:31.543060   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:31.543064   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:31 GMT
	I1109 10:31:31.543100   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:32.036820   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:32.036847   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:32.036859   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:32.036870   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:32.040491   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:32.040507   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:32.040514   29322 round_trippers.go:580]     Audit-Id: b259d728-ddeb-460f-82a5-85054765e0fb
	I1109 10:31:32.040521   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:32.040527   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:32.040534   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:32.040540   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:32.040546   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:32 GMT
	I1109 10:31:32.040881   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:32.041200   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:32.041207   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:32.041213   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:32.041219   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:32.043177   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:32.043186   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:32.043191   29322 round_trippers.go:580]     Audit-Id: 06a54d23-c67b-4a2f-9227-6f2690d09d47
	I1109 10:31:32.043196   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:32.043201   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:32.043206   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:32.043211   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:32.043215   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:32 GMT
	I1109 10:31:32.043338   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:32.536965   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:32.536986   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:32.536999   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:32.537009   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:32.542212   29322 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1109 10:31:32.542224   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:32.542231   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:32.542236   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:32 GMT
	I1109 10:31:32.542241   29322 round_trippers.go:580]     Audit-Id: 144227e5-fff2-49c0-888b-6fb6409f7aff
	I1109 10:31:32.542245   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:32.542250   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:32.542255   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:32.542305   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:32.542589   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:32.542595   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:32.542601   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:32.542606   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:32.544461   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:32.544471   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:32.544476   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:32.544481   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:32.544486   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:32 GMT
	I1109 10:31:32.544491   29322 round_trippers.go:580]     Audit-Id: 57a3641d-adfe-45dd-8d7d-599c12f2b8fb
	I1109 10:31:32.544497   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:32.544502   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:32.544541   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:32.544715   29322 pod_ready.go:102] pod "coredns-565d847f94-fx6lt" in "kube-system" namespace has status "Ready":"False"
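
The pod_ready.go:102 line above closes one iteration of minikube's readiness wait: roughly every 500 ms it re-fetches the coredns pod (and its node) and checks the pod's Ready condition, looping until the condition turns True or the wait times out. The following is a minimal client-go sketch of that polling shape; waitPodReady is a hypothetical helper, not minikube's actual code, and assumes a clientset built elsewhere (e.g. via kubernetes.NewForConfig).

	package readiness

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitPodReady re-fetches the pod every 500ms and reports whether its
	// Ready condition is True, matching the loop shape in the log above.
	func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
		return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
			if err != nil {
				return false, err
			}
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady {
					fmt.Printf("pod %q in %q namespace has status Ready:%q\n", name, ns, cond.Status)
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			// No Ready condition reported yet; keep polling.
			return false, nil
		})
	}
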
	I1109 10:31:33.036832   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:33.036854   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:33.036867   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:33.036877   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:33.040718   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:33.040729   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:33.040734   29322 round_trippers.go:580]     Audit-Id: d2c1634f-e3a9-4f66-85f3-0d4ee0d03c64
	I1109 10:31:33.040739   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:33.040744   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:33.040749   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:33.040753   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:33.040758   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:33 GMT
	I1109 10:31:33.040818   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:33.041104   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:33.041111   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:33.041117   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:33.041122   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:33.042936   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:33.042946   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:33.042951   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:33.042956   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:33.042961   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:33 GMT
	I1109 10:31:33.042966   29322 round_trippers.go:580]     Audit-Id: 18bc118c-f002-4ece-92fa-eca8429200dd
	I1109 10:31:33.042971   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:33.042981   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:33.043018   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:33.536863   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:33.558588   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:33.558616   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:33.558630   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:33.562328   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:33.562343   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:33.562351   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:33 GMT
	I1109 10:31:33.562357   29322 round_trippers.go:580]     Audit-Id: c6860baf-a3f4-4414-b021-e43498a7de3d
	I1109 10:31:33.562365   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:33.562373   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:33.562381   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:33.562388   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:33.562765   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:33.563145   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:33.563154   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:33.563162   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:33.563206   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:33.565137   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:33.565145   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:33.565150   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:33.565154   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:33.565159   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:33.565164   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:33.565169   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:33 GMT
	I1109 10:31:33.565174   29322 round_trippers.go:580]     Audit-Id: 2da252fc-8036-4445-8ce3-aaf94817a633
	I1109 10:31:33.565209   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:34.036842   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:34.036864   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:34.036876   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:34.036885   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:34.039979   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:34.040008   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:34.040014   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:34 GMT
	I1109 10:31:34.040018   29322 round_trippers.go:580]     Audit-Id: a0b92930-5c69-4b0c-b0e8-552bfa5d1c3b
	I1109 10:31:34.040023   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:34.040027   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:34.040031   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:34.040036   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:34.040116   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:34.040390   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:34.040397   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:34.040402   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:34.040407   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:34.042252   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:34.042263   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:34.042270   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:34.042275   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:34 GMT
	I1109 10:31:34.042280   29322 round_trippers.go:580]     Audit-Id: f02a2745-067a-46d6-9c8d-1866720b0e16
	I1109 10:31:34.042286   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:34.042290   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:34.042295   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:34.042580   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:34.536995   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:34.537018   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:34.537032   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:34.537042   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:34.541234   29322 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1109 10:31:34.541264   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:34.541272   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:34 GMT
	I1109 10:31:34.541277   29322 round_trippers.go:580]     Audit-Id: 841b494d-695c-4857-bf89-35c9cb669b9b
	I1109 10:31:34.541281   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:34.541286   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:34.541290   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:34.541294   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:34.541343   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:34.541642   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:34.541648   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:34.541654   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:34.541659   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:34.543553   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:34.543563   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:34.543568   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:34.543573   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:34 GMT
	I1109 10:31:34.543578   29322 round_trippers.go:580]     Audit-Id: b95b4155-eb9b-48aa-b56c-941da76c3d94
	I1109 10:31:34.543583   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:34.543588   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:34.543592   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:34.543919   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:35.037027   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:35.037049   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:35.037061   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:35.037071   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:35.041491   29322 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1109 10:31:35.041514   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:35.041523   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:35.041531   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:35.041538   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:35.041545   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:35.041558   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:35 GMT
	I1109 10:31:35.041567   29322 round_trippers.go:580]     Audit-Id: 001817f9-9d58-4c99-8433-5384df9eade4
	I1109 10:31:35.041638   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:35.042065   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:35.042075   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:35.042084   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:35.042092   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:35.044392   29322 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 10:31:35.044403   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:35.044408   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:35.044413   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:35.044418   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:35.044422   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:35.044427   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:35 GMT
	I1109 10:31:35.044432   29322 round_trippers.go:580]     Audit-Id: dd070cce-3b90-4546-a619-cc73a4d7f0f4
	I1109 10:31:35.044681   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:35.044869   29322 pod_ready.go:102] pod "coredns-565d847f94-fx6lt" in "kube-system" namespace has status "Ready":"False"
	I1109 10:31:35.536750   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:35.536771   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:35.536783   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:35.536793   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:35.540153   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:35.540172   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:35.540183   29322 round_trippers.go:580]     Audit-Id: 8a45ef4b-faf2-4baf-a63d-56fc7f9ff144
	I1109 10:31:35.540192   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:35.540202   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:35.540208   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:35.540214   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:35.540220   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:35 GMT
	I1109 10:31:35.540363   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:35.540739   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:35.540747   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:35.540755   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:35.540762   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:35.542801   29322 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 10:31:35.542811   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:35.542817   29322 round_trippers.go:580]     Audit-Id: 6e4c511d-4457-4f2f-ba7b-344770a1b8b4
	I1109 10:31:35.542822   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:35.542826   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:35.542831   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:35.542836   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:35.542840   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:35 GMT
	I1109 10:31:35.543126   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:36.038073   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:36.038097   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:36.038111   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:36.038121   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:36.042124   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:36.042139   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:36.042147   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:36.042155   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:36 GMT
	I1109 10:31:36.042161   29322 round_trippers.go:580]     Audit-Id: 3225db78-f63b-495a-8e85-97774f4283e0
	I1109 10:31:36.042168   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:36.042174   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:36.042181   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:36.042258   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:36.042583   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:36.042589   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:36.042595   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:36.042602   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:36.044167   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:36.044177   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:36.044183   29322 round_trippers.go:580]     Audit-Id: f5a232db-25ad-4daf-b1dc-f733b9de4f1c
	I1109 10:31:36.044208   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:36.044218   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:36.044224   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:36.044230   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:36.044238   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:36 GMT
	I1109 10:31:36.044526   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:36.537197   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:36.537219   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:36.537232   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:36.537242   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:36.541001   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:36.541014   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:36.541020   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:36.541024   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:36.541029   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:36.541034   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:36.541038   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:36 GMT
	I1109 10:31:36.541043   29322 round_trippers.go:580]     Audit-Id: e38355c9-478e-401f-87c4-51d3b0afb5d5
	I1109 10:31:36.541134   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:36.541420   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:36.541426   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:36.541432   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:36.541437   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:36.543152   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:36.543169   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:36.543175   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:36.543180   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:36.543185   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:36.543191   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:36.543196   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:36 GMT
	I1109 10:31:36.543200   29322 round_trippers.go:580]     Audit-Id: b4cfe0e1-28b8-46e0-8cde-6b2967662db8
	I1109 10:31:36.543397   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:37.036890   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:37.036912   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:37.036925   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:37.036935   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:37.040461   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:37.040474   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:37.040480   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:37.040486   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:37.040492   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:37.040501   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:37.040509   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:37 GMT
	I1109 10:31:37.040517   29322 round_trippers.go:580]     Audit-Id: 1a40889a-a620-4802-9602-3cde5ecfb9d5
	I1109 10:31:37.040597   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:37.040893   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:37.040901   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:37.040907   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:37.040912   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:37.042756   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:37.042766   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:37.042772   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:37.042778   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:37.042783   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:37.042788   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:37 GMT
	I1109 10:31:37.042793   29322 round_trippers.go:580]     Audit-Id: 07c5b8e0-6c1e-4379-8556-d834a7a060f3
	I1109 10:31:37.042798   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:37.042842   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:37.537762   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:37.537785   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:37.537797   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:37.537807   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:37.541980   29322 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1109 10:31:37.542019   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:37.542030   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:37 GMT
	I1109 10:31:37.542039   29322 round_trippers.go:580]     Audit-Id: f7b0f9b4-f8a5-400f-9ca5-bc9f05fc64a8
	I1109 10:31:37.542053   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:37.542061   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:37.542069   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:37.542081   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:37.542213   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:37.542919   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:37.542930   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:37.542937   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:37.542945   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:37.544922   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:37.544933   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:37.544939   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:37 GMT
	I1109 10:31:37.544944   29322 round_trippers.go:580]     Audit-Id: ee5f6cb4-c8dd-4d0f-a803-c9eb24ae12b8
	I1109 10:31:37.544950   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:37.544955   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:37.544960   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:37.544965   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:37.545291   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:37.545466   29322 pod_ready.go:102] pod "coredns-565d847f94-fx6lt" in "kube-system" namespace has status "Ready":"False"
	I1109 10:31:38.036926   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:38.036952   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:38.036964   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:38.036974   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:38.040779   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:38.040797   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:38.040805   29322 round_trippers.go:580]     Audit-Id: b9d2efcb-5089-48bc-bbf1-62e551743e2b
	I1109 10:31:38.040813   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:38.040820   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:38.040827   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:38.040834   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:38.040841   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:38 GMT
	I1109 10:31:38.040921   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:38.041308   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:38.041318   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:38.041326   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:38.041334   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:38.043463   29322 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 10:31:38.043473   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:38.043479   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:38.043484   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:38.043489   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:38.043494   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:38.043499   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:38 GMT
	I1109 10:31:38.043504   29322 round_trippers.go:580]     Audit-Id: 4d8162ae-4c2c-439a-81bd-de82015be9e5
	I1109 10:31:38.043548   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:38.537356   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:38.559065   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:38.559083   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:38.559097   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:38.563093   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:38.563111   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:38.563119   29322 round_trippers.go:580]     Audit-Id: 60139ff2-603f-4cd0-98af-f8a35de4d921
	I1109 10:31:38.563146   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:38.563157   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:38.563164   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:38.563171   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:38.563178   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:38 GMT
	I1109 10:31:38.563256   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:38.563616   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:38.563623   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:38.563628   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:38.563634   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:38.565482   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:38.565493   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:38.565507   29322 round_trippers.go:580]     Audit-Id: db902ccb-d24e-4ba8-977b-2746ec39c137
	I1109 10:31:38.565513   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:38.565518   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:38.565523   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:38.565528   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:38.565533   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:38 GMT
	I1109 10:31:38.565582   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:39.036655   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:39.036724   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:39.036738   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:39.036752   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:39.039792   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:39.039807   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:39.039815   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:39.039823   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:39.039829   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:39.039840   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:39 GMT
	I1109 10:31:39.039848   29322 round_trippers.go:580]     Audit-Id: 0934efc0-2cfa-40a6-9fbd-34653c9d6076
	I1109 10:31:39.039854   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:39.039914   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:39.040194   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:39.040201   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:39.040207   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:39.040225   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:39.041773   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:39.041783   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:39.041789   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:39.041794   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:39.041799   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:39 GMT
	I1109 10:31:39.041805   29322 round_trippers.go:580]     Audit-Id: 1d4585c6-060c-4ea6-9351-27c84b8cd999
	I1109 10:31:39.041809   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:39.041814   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:39.041856   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:39.537333   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:39.537359   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:39.537399   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:39.537423   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:39.541535   29322 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1109 10:31:39.541551   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:39.541559   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:39.541565   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:39 GMT
	I1109 10:31:39.541573   29322 round_trippers.go:580]     Audit-Id: 7b8aa59d-c7d2-421f-a1bd-230da0a63aa1
	I1109 10:31:39.541580   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:39.541587   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:39.541594   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:39.541674   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:39.542049   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:39.542058   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:39.542067   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:39.542088   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:39.544007   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:39.544016   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:39.544021   29322 round_trippers.go:580]     Audit-Id: a476bdd8-cd98-47d3-86d4-2cb557cfc75e
	I1109 10:31:39.544031   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:39.544036   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:39.544040   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:39.544045   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:39.544050   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:39 GMT
	I1109 10:31:39.544093   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:40.036879   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:40.036905   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:40.036918   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:40.036928   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:40.040896   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:40.040926   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:40.040933   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:40.040938   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:40 GMT
	I1109 10:31:40.040943   29322 round_trippers.go:580]     Audit-Id: 55224fad-ba88-4c81-85e6-44fd80bde8c7
	I1109 10:31:40.040949   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:40.040956   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:40.040963   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:40.041023   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:40.041337   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:40.041344   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:40.041350   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:40.041363   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:40.043078   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:40.043090   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:40.043098   29322 round_trippers.go:580]     Audit-Id: 40532329-031d-44e1-a10d-18a75b03266b
	I1109 10:31:40.043105   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:40.043110   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:40.043116   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:40.043120   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:40.043125   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:40 GMT
	I1109 10:31:40.043307   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:40.043484   29322 pod_ready.go:102] pod "coredns-565d847f94-fx6lt" in "kube-system" namespace has status "Ready":"False"
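The pod_ready.go:102 line above is the readiness predicate driving this loop: each iteration GETs the coredns pod, reads its Ready condition, and logs the result. Below is a minimal sketch of that kind of check using client-go; the package and helper names here are invented for illustration, and this is not minikube's actual pod_ready implementation.

	// Sketch: the kind of check behind the pod_ready.go:102 lines, assuming a
	// standard client-go clientset. Illustrative only; not minikube's code.
	package podready

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// podIsReady fetches the pod and reports whether its Ready condition is
	// True, mirroring the `has status "Ready":"False"` lines in the log.
	func podIsReady(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				fmt.Printf("pod %q in %q namespace has status %q:%q\n", name, ns, c.Type, c.Status)
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}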
	I1109 10:31:40.536695   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:40.536721   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:40.536734   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:40.536743   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:40.540575   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:40.540590   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:40.540598   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:40.540604   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:40.540611   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:40 GMT
	I1109 10:31:40.540617   29322 round_trippers.go:580]     Audit-Id: f9a0b569-78e2-4176-8d0e-b9eb917d61a2
	I1109 10:31:40.540624   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:40.540631   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:40.540706   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:40.541035   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:40.541041   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:40.541047   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:40.541053   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:40.542755   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:40.542765   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:40.542770   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:40 GMT
	I1109 10:31:40.542775   29322 round_trippers.go:580]     Audit-Id: 4f0b38e6-dbd4-41a1-9cea-8d76268b4f15
	I1109 10:31:40.542779   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:40.542784   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:40.542789   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:40.542794   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:40.542904   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:41.036572   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:41.036599   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:41.036612   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:41.036622   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:41.039850   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:41.039859   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:41.039865   29322 round_trippers.go:580]     Audit-Id: 2fc7dba9-2a1c-49cc-af1f-87ee7d99d8ef
	I1109 10:31:41.039869   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:41.039876   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:41.039881   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:41.039886   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:41.039891   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:41 GMT
	I1109 10:31:41.040234   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:41.040516   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:41.040522   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:41.040528   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:41.040533   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:41.042672   29322 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 10:31:41.042681   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:41.042686   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:41.042691   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:41.042696   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:41.042701   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:41.042705   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:41 GMT
	I1109 10:31:41.042712   29322 round_trippers.go:580]     Audit-Id: cb93996e-c5df-49ef-893a-d79ca72b5817
	I1109 10:31:41.043106   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:41.537112   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:41.537139   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:41.537152   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:41.537163   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:41.541243   29322 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1109 10:31:41.541259   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:41.541267   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:41.541274   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:41 GMT
	I1109 10:31:41.541280   29322 round_trippers.go:580]     Audit-Id: cb9bf68d-4a84-4506-baeb-8736ddf3eed7
	I1109 10:31:41.541286   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:41.541337   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:41.541392   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:41.541484   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:41.541780   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:41.541787   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:41.541792   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:41.541797   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:41.543543   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:41.543553   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:41.543558   29322 round_trippers.go:580]     Audit-Id: 97e107ae-0215-47eb-a916-81340c90091e
	I1109 10:31:41.543563   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:41.543568   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:41.543573   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:41.543579   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:41.543586   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:41 GMT
	I1109 10:31:41.543706   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:42.036869   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:42.036888   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:42.036916   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:42.036926   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:42.039863   29322 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 10:31:42.039873   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:42.039878   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:42.039884   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:42.039889   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:42.039894   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:42 GMT
	I1109 10:31:42.039899   29322 round_trippers.go:580]     Audit-Id: 5adec263-61c7-437d-9c8a-83d6b940f7c8
	I1109 10:31:42.039904   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:42.039964   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:42.040242   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:42.040249   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:42.040254   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:42.040259   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:42.041995   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:42.042005   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:42.042011   29322 round_trippers.go:580]     Audit-Id: 6e650d22-26ea-4ff9-a062-8a73a56c448d
	I1109 10:31:42.042016   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:42.042021   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:42.042028   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:42.042033   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:42.042038   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:42 GMT
	I1109 10:31:42.042088   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:42.537409   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:42.537432   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:42.537444   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:42.537454   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:42.540819   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:42.540833   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:42.540841   29322 round_trippers.go:580]     Audit-Id: 899ee5ac-bba4-4e45-9020-4382b9c55cdd
	I1109 10:31:42.540848   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:42.540854   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:42.540861   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:42.540867   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:42.540874   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:42 GMT
	I1109 10:31:42.541162   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:42.541529   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:42.541536   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:42.541543   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:42.541548   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:42.543405   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:42.543415   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:42.543421   29322 round_trippers.go:580]     Audit-Id: 3ba71663-9587-4d56-af1a-44dc6f26c009
	I1109 10:31:42.543426   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:42.543431   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:42.543435   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:42.543440   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:42.543446   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:42 GMT
	I1109 10:31:42.543499   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:42.543678   29322 pod_ready.go:102] pod "coredns-565d847f94-fx6lt" in "kube-system" namespace has status "Ready":"False"
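The round_trippers.go lines that make up the bulk of each iteration come from client-go's debugging round tripper, which at high log verbosity prints the request line, request headers, response status with latency, and response headers. A generic plain-net/http equivalent is sketched below, assuming nothing beyond the Go standard library; it approximates the log's output format but is not client-go's implementation.

	// Sketch: a logging http.RoundTripper that prints request/response detail
	// in roughly the shape seen above. Illustrative; not client-go's code.
	package debugrt

	import (
		"log"
		"net/http"
		"time"
	)

	type loggingRoundTripper struct {
		next http.RoundTripper
	}

	func (l loggingRoundTripper) RoundTrip(req *http.Request) (*http.Response, error) {
		log.Printf("%s %s", req.Method, req.URL)
		log.Printf("Request Headers:")
		for k, vs := range req.Header {
			for _, v := range vs {
				log.Printf("    %s: %s", k, v)
			}
		}
		start := time.Now()
		resp, err := l.next.RoundTrip(req)
		if err != nil {
			return nil, err
		}
		log.Printf("Response Status: %s in %d milliseconds", resp.Status, time.Since(start).Milliseconds())
		log.Printf("Response Headers:")
		for k, vs := range resp.Header {
			for _, v := range vs {
				log.Printf("    %s: %s", k, v)
			}
		}
		return resp, nil
	}

	// Wrap installs the logger on an *http.Client.
	func Wrap(c *http.Client) {
		base := c.Transport
		if base == nil {
			base = http.DefaultTransport
		}
		c.Transport = loggingRoundTripper{next: base}
	}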
	I1109 10:31:43.036823   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:43.036850   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:43.036862   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:43.036872   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:43.040633   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:43.040653   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:43.040661   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:43 GMT
	I1109 10:31:43.040668   29322 round_trippers.go:580]     Audit-Id: 4b0ad16c-19d6-4ab0-becd-e0090044785e
	I1109 10:31:43.040676   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:43.040683   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:43.040692   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:43.040701   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:43.040863   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:43.041255   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:43.041262   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:43.041268   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:43.041273   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:43.043084   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:43.043093   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:43.043099   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:43.043104   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:43.043109   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:43.043113   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:43 GMT
	I1109 10:31:43.043118   29322 round_trippers.go:580]     Audit-Id: d1986fec-e18b-4445-8a07-cb8e51765ffc
	I1109 10:31:43.043123   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:43.043172   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:43.536483   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:43.557331   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:43.557376   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:43.557391   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:43.561490   29322 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1109 10:31:43.561505   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:43.561515   29322 round_trippers.go:580]     Audit-Id: 6fa30a43-0c2a-41a3-911f-c35ad411a163
	I1109 10:31:43.561522   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:43.561529   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:43.561535   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:43.561543   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:43.561549   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:43 GMT
	I1109 10:31:43.561644   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:43.561935   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:43.561941   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:43.561947   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:43.561953   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:43.563728   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:43.563737   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:43.563742   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:43.563747   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:43 GMT
	I1109 10:31:43.563752   29322 round_trippers.go:580]     Audit-Id: 9a9558e2-5cce-476e-b0c2-5cbb9063f38a
	I1109 10:31:43.563757   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:43.563763   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:43.563768   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:43.563805   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:44.036574   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:44.036602   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:44.036650   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:44.036663   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:44.040346   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:44.040359   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:44.040367   29322 round_trippers.go:580]     Audit-Id: 5222d96d-fa3d-4863-9a8b-19ac31674994
	I1109 10:31:44.040376   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:44.040384   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:44.040391   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:44.040398   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:44.040404   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:44 GMT
	I1109 10:31:44.040468   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:44.040751   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:44.040758   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:44.040764   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:44.040770   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:44.042553   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:44.042564   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:44.042570   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:44.042575   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:44.042580   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:44.042584   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:44.042589   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:44 GMT
	I1109 10:31:44.042595   29322 round_trippers.go:580]     Audit-Id: 02691973-6c37-4452-97f3-b4b2a9f58304
	I1109 10:31:44.042743   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:44.537649   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:44.537671   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:44.537684   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:44.537695   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:44.541874   29322 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1109 10:31:44.541887   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:44.541897   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:44 GMT
	I1109 10:31:44.541903   29322 round_trippers.go:580]     Audit-Id: 621b0cbd-d244-4066-8d4c-52420d442952
	I1109 10:31:44.541911   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:44.541917   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:44.541923   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:44.541932   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:44.541992   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:44.542306   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:44.542313   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:44.542319   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:44.542324   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:44.543942   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:44.543950   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:44.543955   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:44.543962   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:44 GMT
	I1109 10:31:44.543968   29322 round_trippers.go:580]     Audit-Id: 7670d194-b614-4ae5-a540-8c0b9d06f403
	I1109 10:31:44.543973   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:44.543978   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:44.543982   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:44.544361   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:44.544540   29322 pod_ready.go:102] pod "coredns-565d847f94-fx6lt" in "kube-system" namespace has status "Ready":"False"
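The repeating GET / Request Headers / Response Status / Response Body groups above are client-go's round-tripper debug output, emitted because the binary runs at high log verbosity: every API request the test driver makes is logged with its method, URL, headers, status, and (truncated) body. A minimal sketch of the underlying pattern, assuming nothing about minikube's internals; the wrapper type below is hypothetical, and client-go's real logging transport lives in k8s.io/client-go/transport:

package main

import (
	"log"
	"net/http"
	"time"
)

// loggingRoundTripper is a hypothetical stand-in for client-go's debug
// transport: it logs the method, URL, request headers, and response
// status (with elapsed time) around every HTTP call it forwards.
type loggingRoundTripper struct {
	next http.RoundTripper
}

func (l *loggingRoundTripper) RoundTrip(req *http.Request) (*http.Response, error) {
	log.Printf("%s %s", req.Method, req.URL)
	log.Print("Request Headers:")
	for name, values := range req.Header {
		log.Printf("    %s: %v", name, values)
	}
	start := time.Now()
	resp, err := l.next.RoundTrip(req)
	if err != nil {
		return nil, err
	}
	log.Printf("Response Status: %s in %d milliseconds", resp.Status, time.Since(start).Milliseconds())
	return resp, nil
}

func main() {
	// Endpoint taken from the log above; without a live apiserver on
	// that port the request simply fails and the error is printed.
	client := &http.Client{Transport: &loggingRoundTripper{next: http.DefaultTransport}}
	resp, err := client.Get("https://127.0.0.1:62610/api/v1/nodes/multinode-102528")
	if err != nil {
		log.Fatal(err)
	}
	resp.Body.Close()
}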
	I1109 10:31:45.036672   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:45.036696   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:45.036708   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:45.036718   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:45.039821   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:45.039831   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:45.039840   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:45.039848   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:45 GMT
	I1109 10:31:45.039853   29322 round_trippers.go:580]     Audit-Id: 2706f725-0f50-4b9c-83b8-cdc2cc406a79
	I1109 10:31:45.039857   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:45.039862   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:45.039867   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:45.039930   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:45.040215   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:45.040222   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:45.040228   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:45.040233   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:45.042282   29322 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 10:31:45.042291   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:45.042296   29322 round_trippers.go:580]     Audit-Id: 82dbfdb5-db39-4d0d-9e6b-0a58d91edc63
	I1109 10:31:45.042301   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:45.042307   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:45.042316   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:45.042322   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:45.042326   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:45 GMT
	I1109 10:31:45.042364   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:45.538349   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:45.538375   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:45.538388   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:45.538398   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:45.542063   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:45.542078   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:45.542085   29322 round_trippers.go:580]     Audit-Id: 20db784e-e085-4a3a-be3c-f5c5c001140a
	I1109 10:31:45.542092   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:45.542099   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:45.542105   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:45.542112   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:45.542118   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:45 GMT
	I1109 10:31:45.542179   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:45.542481   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:45.542487   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:45.542493   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:45.542498   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:45.544375   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:45.544386   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:45.544392   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:45.544397   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:45 GMT
	I1109 10:31:45.544401   29322 round_trippers.go:580]     Audit-Id: 4bc65b0c-05aa-44aa-9194-694dce513a02
	I1109 10:31:45.544406   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:45.544411   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:45.544416   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:45.544449   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:46.036916   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:46.036943   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:46.036956   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:46.036966   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:46.040743   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:46.040759   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:46.040766   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:46.040780   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:46.040788   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:46 GMT
	I1109 10:31:46.040795   29322 round_trippers.go:580]     Audit-Id: bb7c85cf-10eb-41a2-a5b4-e173cb5547ab
	I1109 10:31:46.040801   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:46.040810   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:46.040869   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:46.041250   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:46.041257   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:46.041263   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:46.041269   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:46.043115   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:46.043124   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:46.043130   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:46.043135   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:46 GMT
	I1109 10:31:46.043140   29322 round_trippers.go:580]     Audit-Id: 52ce5708-8363-4304-bf71-0338ac2165b6
	I1109 10:31:46.043145   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:46.043150   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:46.043154   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:46.043189   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:46.538442   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:46.538464   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:46.538481   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:46.538492   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:46.542151   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:46.542168   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:46.542176   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:46.542184   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:46 GMT
	I1109 10:31:46.542211   29322 round_trippers.go:580]     Audit-Id: e168128b-3cbf-424a-99af-5849882aa0f5
	I1109 10:31:46.542223   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:46.542230   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:46.542243   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:46.542503   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:46.542867   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:46.542873   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:46.542879   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:46.542885   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:46.545015   29322 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 10:31:46.545025   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:46.545031   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:46.545036   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:46.545041   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:46.545045   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:46.545049   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:46 GMT
	I1109 10:31:46.545054   29322 round_trippers.go:580]     Audit-Id: 1b406548-09f0-4549-8ea3-e7b0756e2b07
	I1109 10:31:46.545089   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:46.545297   29322 pod_ready.go:102] pod "coredns-565d847f94-fx6lt" in "kube-system" namespace has status "Ready":"False"
	I1109 10:31:47.037254   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:47.037282   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:47.037333   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:47.037346   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:47.040984   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:47.041000   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:47.041008   29322 round_trippers.go:580]     Audit-Id: 49c5d17f-4e21-4678-89a0-30bbdeb09aef
	I1109 10:31:47.041014   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:47.041021   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:47.041027   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:47.041035   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:47.041041   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:47 GMT
	I1109 10:31:47.041433   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:47.041714   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:47.041722   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:47.041728   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:47.041733   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:47.043592   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:47.043601   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:47.043606   29322 round_trippers.go:580]     Audit-Id: 68b7d886-9777-4bf1-9a47-bd0eb57a0bbc
	I1109 10:31:47.043611   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:47.043617   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:47.043621   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:47.043626   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:47.043631   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:47 GMT
	I1109 10:31:47.043665   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:47.536538   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:47.536561   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:47.536573   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:47.536583   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:47.540233   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:47.540249   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:47.540260   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:47.540269   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:47.540281   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:47 GMT
	I1109 10:31:47.540291   29322 round_trippers.go:580]     Audit-Id: 9960ada3-e31d-4ad9-8d08-6d76acc22053
	I1109 10:31:47.540301   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:47.540313   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:47.540392   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:47.540687   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:47.540695   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:47.540701   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:47.540707   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:47.542667   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:47.542678   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:47.542684   29322 round_trippers.go:580]     Audit-Id: 7b43c254-7046-4909-b723-b066ae036071
	I1109 10:31:47.542689   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:47.542695   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:47.542700   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:47.542705   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:47.542710   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:47 GMT
	I1109 10:31:47.542743   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:48.036562   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:48.036589   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:48.036601   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:48.036611   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:48.040071   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:48.040089   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:48.040099   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:48 GMT
	I1109 10:31:48.040106   29322 round_trippers.go:580]     Audit-Id: a93b1c13-e8c1-482b-a7f2-2c3dce2bd92e
	I1109 10:31:48.040112   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:48.040118   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:48.040125   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:48.040133   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:48.040422   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:48.040701   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:48.040709   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:48.040715   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:48.040720   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:48.042596   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:48.042606   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:48.042611   29322 round_trippers.go:580]     Audit-Id: f0fb3d1b-c2ab-4de7-9add-e6536466d99b
	I1109 10:31:48.042616   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:48.042621   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:48.042625   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:48.042630   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:48.042635   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:48 GMT
	I1109 10:31:48.042671   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:48.536840   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:48.559516   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:48.559527   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:48.559541   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:48.562439   29322 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 10:31:48.562450   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:48.562456   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:48.562460   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:48.562466   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:48 GMT
	I1109 10:31:48.562470   29322 round_trippers.go:580]     Audit-Id: 866c18cd-33f4-492b-a91a-67eda5b68284
	I1109 10:31:48.562475   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:48.562479   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:48.562526   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:48.562807   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:48.562813   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:48.562819   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:48.562825   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:48.564641   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:48.564650   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:48.564655   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:48.564660   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:48.564665   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:48 GMT
	I1109 10:31:48.564671   29322 round_trippers.go:580]     Audit-Id: 56e50d0a-24fd-46c0-be60-76c519e69a6c
	I1109 10:31:48.564676   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:48.564681   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:48.564713   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:48.564889   29322 pod_ready.go:102] pod "coredns-565d847f94-fx6lt" in "kube-system" namespace has status "Ready":"False"
	I1109 10:31:49.036383   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:49.036462   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:49.036474   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:49.036484   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:49.039722   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:49.039734   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:49.039745   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:49.039755   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:49.039761   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:49.039765   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:49 GMT
	I1109 10:31:49.039771   29322 round_trippers.go:580]     Audit-Id: ff1e9d97-77bd-4aa5-924a-358b661f0b01
	I1109 10:31:49.039775   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:49.039979   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1073","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6553 chars]
	I1109 10:31:49.040259   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:49.040266   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:49.040272   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:49.040277   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:49.042694   29322 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 10:31:49.042704   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:49.042710   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:49 GMT
	I1109 10:31:49.042716   29322 round_trippers.go:580]     Audit-Id: 4d2b433b-dd92-41c0-9a61-4323ed1e9045
	I1109 10:31:49.042721   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:49.042727   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:49.042731   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:49.042736   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:49.042774   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:49.042951   29322 pod_ready.go:92] pod "coredns-565d847f94-fx6lt" in "kube-system" namespace has status "Ready":"True"
	I1109 10:31:49.042962   29322 pod_ready.go:81] duration metric: took 38.511815615s waiting for pod "coredns-565d847f94-fx6lt" in "kube-system" namespace to be "Ready" ...
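The 38.5s just logged is the visible effect of the pod_ready wait loop: the pod is re-fetched roughly every 500ms (note the ~.036 and ~.536 timestamps above) and its Ready condition checked until it reports True, with pod_ready.go:102 logging each "False" poll and pod_ready.go:92 the final "True". A minimal sketch of that pattern, assuming a configured kubernetes.Interface client; the helper name and error handling below are illustrative, not minikube's actual code:

package main

import (
	"context"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls the named pod every 500ms, for up to 4 minutes,
// until its Ready condition reports True (the pattern behind the
// pod_ready.go:78/92/102 lines in the log).
func waitPodReady(cs kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(500*time.Millisecond, 4*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.Background(), name, metav1.GetOptions{})
		if err != nil {
			return false, err // stop on apiserver errors; real code may tolerate transient ones
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil // Ready condition not posted yet, keep polling
	})
}

func main() {
	// Assumes a clientset built elsewhere (e.g. from a kubeconfig via
	// k8s.io/client-go/tools/clientcmd and kubernetes.NewForConfig).
	var cs kubernetes.Interface
	if cs == nil {
		log.Print("sketch only: construct a clientset before calling waitPodReady")
		return
	}
	if err := waitPodReady(cs, "kube-system", "coredns-565d847f94-fx6lt"); err != nil {
		log.Fatal(err)
	}
}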
	I1109 10:31:49.042970   29322 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-102528" in "kube-system" namespace to be "Ready" ...
	I1109 10:31:49.042997   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/etcd-multinode-102528
	I1109 10:31:49.043001   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:49.043008   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:49.043014   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:49.044825   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:49.044833   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:49.044844   29322 round_trippers.go:580]     Audit-Id: cdfff0ca-5d2a-49f1-9cb2-42f9c7f7208c
	I1109 10:31:49.044851   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:49.044856   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:49.044862   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:49.044870   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:49.044877   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:49 GMT
	I1109 10:31:49.045039   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-102528","namespace":"kube-system","uid":"5dde8340-2916-4da6-91aa-ea6dfe24a5ad","resourceVersion":"1041","creationTimestamp":"2022-11-09T18:25:54Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"58165e0d3ee72e9b0f054fadec557161","kubernetes.io/config.mirror":"58165e0d3ee72e9b0f054fadec557161","kubernetes.io/config.seen":"2022-11-09T18:25:54.343403314Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6046 chars]
	I1109 10:31:49.045263   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:49.045270   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:49.045276   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:49.045282   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:49.047218   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:49.047227   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:49.047232   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:49 GMT
	I1109 10:31:49.047237   29322 round_trippers.go:580]     Audit-Id: a747d2d2-59d5-4804-98ef-b75ef054f903
	I1109 10:31:49.047244   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:49.047253   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:49.047259   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:49.047270   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:49.047475   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:49.047643   29322 pod_ready.go:92] pod "etcd-multinode-102528" in "kube-system" namespace has status "Ready":"True"
	I1109 10:31:49.047650   29322 pod_ready.go:81] duration metric: took 4.674233ms waiting for pod "etcd-multinode-102528" in "kube-system" namespace to be "Ready" ...
	I1109 10:31:49.047659   29322 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-102528" in "kube-system" namespace to be "Ready" ...
	I1109 10:31:49.047683   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-102528
	I1109 10:31:49.047687   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:49.047693   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:49.047699   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:49.049519   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:49.049531   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:49.049536   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:49.049541   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:49.049546   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:49.049550   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:49 GMT
	I1109 10:31:49.049556   29322 round_trippers.go:580]     Audit-Id: d69b7d01-a93c-493e-b527-40eb3945b564
	I1109 10:31:49.049563   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:49.049730   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-102528","namespace":"kube-system","uid":"f48fa313-e8ec-42bc-87bc-7daede794fe2","resourceVersion":"1029","creationTimestamp":"2022-11-09T18:25:54Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"b110864cd1ed66678c31ad09d14c41ec","kubernetes.io/config.mirror":"b110864cd1ed66678c31ad09d14c41ec","kubernetes.io/config.seen":"2022-11-09T18:25:54.343403906Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 8428 chars]
	I1109 10:31:49.049984   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:49.049991   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:49.049997   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:49.050003   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:49.051638   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:49.051646   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:49.051651   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:49.051656   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:49.051661   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:49.051665   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:49 GMT
	I1109 10:31:49.051671   29322 round_trippers.go:580]     Audit-Id: 271b4e64-19ae-4b51-88c7-7571edbafde1
	I1109 10:31:49.051675   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:49.051888   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:49.052066   29322 pod_ready.go:92] pod "kube-apiserver-multinode-102528" in "kube-system" namespace has status "Ready":"True"
	I1109 10:31:49.052072   29322 pod_ready.go:81] duration metric: took 4.408129ms waiting for pod "kube-apiserver-multinode-102528" in "kube-system" namespace to be "Ready" ...
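By contrast, the etcd and kube-apiserver waits just above complete in about 4ms each because those pods are already Ready, so the very first poll succeeds. Continuing the hypothetical waitPodReady sketch from earlier, the sequential per-component loop would look roughly like this (pod names taken from the log; the loop itself is illustrative, not minikube's code):

// Illustrative continuation of the waitPodReady sketch above: check the
// control-plane pods one after another, in the order the log shows.
for _, name := range []string{
	"etcd-multinode-102528",
	"kube-apiserver-multinode-102528",
	"kube-controller-manager-multinode-102528",
} {
	if err := waitPodReady(cs, "kube-system", name); err != nil {
		log.Fatalf("pod %q never became Ready: %v", name, err)
	}
}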
	I1109 10:31:49.052078   29322 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-102528" in "kube-system" namespace to be "Ready" ...
	I1109 10:31:49.052110   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-102528
	I1109 10:31:49.052116   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:49.052122   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:49.052127   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:49.054137   29322 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 10:31:49.054146   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:49.054151   29322 round_trippers.go:580]     Audit-Id: e4b150df-5b2e-4f43-b903-08847b9eae86
	I1109 10:31:49.054156   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:49.054161   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:49.054165   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:49.054170   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:49.054175   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:49 GMT
	I1109 10:31:49.054306   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-102528","namespace":"kube-system","uid":"3dd056ba-22b5-4b0c-aa7e-9e00d215df9a","resourceVersion":"1035","creationTimestamp":"2022-11-09T18:25:52Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ec9e561364ffe02db1e38ab82ddc699b","kubernetes.io/config.mirror":"ec9e561364ffe02db1e38ab82ddc699b","kubernetes.io/config.seen":"2022-11-09T18:25:43.900701692Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 8005 chars]
	I1109 10:31:49.054552   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:49.054559   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:49.054565   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:49.054570   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:49.056172   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:49.056180   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:49.056185   29322 round_trippers.go:580]     Audit-Id: dafc6e5f-b537-4c60-bdff-60ca3bf3983d
	I1109 10:31:49.056190   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:49.056194   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:49.056199   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:49.056203   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:49.056208   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:49 GMT
	I1109 10:31:49.056369   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:49.056537   29322 pod_ready.go:92] pod "kube-controller-manager-multinode-102528" in "kube-system" namespace has status "Ready":"True"
	I1109 10:31:49.056544   29322 pod_ready.go:81] duration metric: took 4.461605ms waiting for pod "kube-controller-manager-multinode-102528" in "kube-system" namespace to be "Ready" ...
	I1109 10:31:49.056551   29322 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-9wsxp" in "kube-system" namespace to be "Ready" ...
	I1109 10:31:49.056575   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/kube-proxy-9wsxp
	I1109 10:31:49.056580   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:49.056586   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:49.056591   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:49.058233   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:49.058241   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:49.058246   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:49.058251   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:49.058256   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:49.058261   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:49.058266   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:49 GMT
	I1109 10:31:49.058271   29322 round_trippers.go:580]     Audit-Id: 055832f8-1e45-4830-8e2f-3942f90d38d2
	I1109 10:31:49.058436   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-9wsxp","generateName":"kube-proxy-","namespace":"kube-system","uid":"03c6822b-9fef-4fa3-82a3-bb5082cf31b3","resourceVersion":"1023","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"controller-revision-hash":"b9c5d5dc4","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"bf8e9b6c-a049-46db-b636-548666fd5424","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf8e9b6c-a049-46db-b636-548666fd5424\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5736 chars]
	I1109 10:31:49.058660   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:49.058666   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:49.058672   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:49.058678   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:49.060203   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:49.060211   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:49.060216   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:49.060220   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:49.060226   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:49.060230   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:49 GMT
	I1109 10:31:49.060236   29322 round_trippers.go:580]     Audit-Id: 040b3107-c5e1-4000-91fe-dd3f869b3cad
	I1109 10:31:49.060240   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:49.060270   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:49.060433   29322 pod_ready.go:92] pod "kube-proxy-9wsxp" in "kube-system" namespace has status "Ready":"True"
	I1109 10:31:49.060439   29322 pod_ready.go:81] duration metric: took 3.883433ms waiting for pod "kube-proxy-9wsxp" in "kube-system" namespace to be "Ready" ...
	I1109 10:31:49.060444   29322 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-c4lh6" in "kube-system" namespace to be "Ready" ...
	I1109 10:31:49.237831   29322 request.go:614] Waited for 177.283934ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/kube-proxy-c4lh6
	I1109 10:31:49.237880   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/kube-proxy-c4lh6
	I1109 10:31:49.237888   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:49.237900   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:49.237911   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:49.241885   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:49.241902   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:49.241911   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:49.241920   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:49.241930   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:49 GMT
	I1109 10:31:49.241938   29322 round_trippers.go:580]     Audit-Id: c00fee46-7caa-4ebc-8618-e170b21456bb
	I1109 10:31:49.241947   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:49.241955   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:49.242016   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-c4lh6","generateName":"kube-proxy-","namespace":"kube-system","uid":"e9055586-6022-464a-acdd-6fce3c87392b","resourceVersion":"845","creationTimestamp":"2022-11-09T18:26:28Z","labels":{"controller-revision-hash":"b9c5d5dc4","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"bf8e9b6c-a049-46db-b636-548666fd5424","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf8e9b6c-a049-46db-b636-548666fd5424\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5741 chars]
	I1109 10:31:49.438467   29322 request.go:614] Waited for 196.086951ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:62610/api/v1/nodes/multinode-102528-m02
	I1109 10:31:49.438566   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528-m02
	I1109 10:31:49.438578   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:49.438605   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:49.438617   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:49.443217   29322 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1109 10:31:49.443228   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:49.443234   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:49.443239   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:49 GMT
	I1109 10:31:49.443243   29322 round_trippers.go:580]     Audit-Id: 1eb58aad-1026-4caa-a770-3d00568a3c5d
	I1109 10:31:49.443248   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:49.443253   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:49.443258   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:49.443315   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528-m02","uid":"e1542fe1-dc88-406c-b080-a5120e5abea2","resourceVersion":"857","creationTimestamp":"2022-11-09T18:29:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:29:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:29:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.ku
bernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f [truncated 4536 chars]
	I1109 10:31:49.443488   29322 pod_ready.go:92] pod "kube-proxy-c4lh6" in "kube-system" namespace has status "Ready":"True"
	I1109 10:31:49.443494   29322 pod_ready.go:81] duration metric: took 383.055188ms waiting for pod "kube-proxy-c4lh6" in "kube-system" namespace to be "Ready" ...
	I1109 10:31:49.443501   29322 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-kh6r6" in "kube-system" namespace to be "Ready" ...
	I1109 10:31:49.637295   29322 request.go:614] Waited for 193.726412ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/kube-proxy-kh6r6
	I1109 10:31:49.637350   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/kube-proxy-kh6r6
	I1109 10:31:49.637358   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:49.637370   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:49.637380   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:49.641289   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:49.641304   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:49.641312   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:49.641318   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:49 GMT
	I1109 10:31:49.641325   29322 round_trippers.go:580]     Audit-Id: 5d59c0a2-04e2-4809-921e-06e44e8d71a5
	I1109 10:31:49.641331   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:49.641337   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:49.641343   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:49.641585   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-kh6r6","generateName":"kube-proxy-","namespace":"kube-system","uid":"de2bad4b-35b4-4537-a6a3-7acd77c63e69","resourceVersion":"925","creationTimestamp":"2022-11-09T18:27:09Z","labels":{"controller-revision-hash":"b9c5d5dc4","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"bf8e9b6c-a049-46db-b636-548666fd5424","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:27:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf8e9b6c-a049-46db-b636-548666fd5424\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5743 chars]
	I1109 10:31:49.837062   29322 request.go:614] Waited for 195.160582ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:62610/api/v1/nodes/multinode-102528-m03
	I1109 10:31:49.837127   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528-m03
	I1109 10:31:49.837135   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:49.837144   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:49.837151   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:49.839993   29322 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1109 10:31:49.840003   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:49.840010   29322 round_trippers.go:580]     Audit-Id: d772f876-f389-4d2b-bb46-c37a8e0fe4e7
	I1109 10:31:49.840015   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:49.840020   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:49.840025   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:49.840030   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:49.840034   29322 round_trippers.go:580]     Content-Length: 210
	I1109 10:31:49.840039   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:49 GMT
	I1109 10:31:49.840051   29322 request.go:1154] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"multinode-102528-m03\" not found","reason":"NotFound","details":{"name":"multinode-102528-m03","kind":"nodes"},"code":404}
	I1109 10:31:49.840162   29322 pod_ready.go:97] node "multinode-102528-m03" hosting pod "kube-proxy-kh6r6" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-102528-m03": nodes "multinode-102528-m03" not found
	I1109 10:31:49.840169   29322 pod_ready.go:81] duration metric: took 396.674176ms waiting for pod "kube-proxy-kh6r6" in "kube-system" namespace to be "Ready" ...
	E1109 10:31:49.840174   29322 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-102528-m03" hosting pod "kube-proxy-kh6r6" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-102528-m03": nodes "multinode-102528-m03" not found
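[editor's note] The 404 above explains why the wait loop skips "kube-proxy-kh6r6": its hosting node "multinode-102528-m03" no longer exists, so readiness cannot be judged and the pod is skipped rather than failed. A minimal sketch of that decision, assuming k8s.io/client-go and k8s.io/apimachinery; the function name is illustrative and this is not minikube's actual pod_ready helper:

// Sketch: skip readiness waiting for a pod whose hosting node is gone,
// mirroring the NotFound handling logged above. Illustrative only.
package readiness

import (
	"context"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// shouldSkipPod reports whether readiness waiting should be skipped because
// the node the pod is scheduled on cannot be fetched (404 NotFound).
func shouldSkipPod(ctx context.Context, cs kubernetes.Interface, nodeName string) (bool, error) {
	_, err := cs.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
	if apierrors.IsNotFound(err) {
		// Matches the log: node not found -> skip the pod, do not fail the wait.
		fmt.Printf("node %q not found, skipping pod readiness wait\n", nodeName)
		return true, nil
	}
	return false, err
}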
	I1109 10:31:49.840179   29322 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-102528" in "kube-system" namespace to be "Ready" ...
	I1109 10:31:50.037017   29322 request.go:614] Waited for 196.806176ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-102528
	I1109 10:31:50.037072   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-102528
	I1109 10:31:50.037080   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:50.037093   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:50.037134   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:50.040503   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:50.040516   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:50.040524   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:50.040530   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:50.040538   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:50 GMT
	I1109 10:31:50.040544   29322 round_trippers.go:580]     Audit-Id: d2ee6724-57c2-4f45-8850-e4c5e803441a
	I1109 10:31:50.040551   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:50.040557   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:50.040638   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-102528","namespace":"kube-system","uid":"26dff845-4103-4884-86e3-42c37dc577c0","resourceVersion":"1014","creationTimestamp":"2022-11-09T18:25:54Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"9865f6bce1997a307196ce89b4764fd5","kubernetes.io/config.mirror":"9865f6bce1997a307196ce89b4764fd5","kubernetes.io/config.seen":"2022-11-09T18:25:54.343402489Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 4887 chars]
	I1109 10:31:50.238512   29322 request.go:614] Waited for 197.480323ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:50.238561   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:50.238570   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:50.238582   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:50.238595   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:50.242460   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:50.242477   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:50.242484   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:50.242512   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:50.242525   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:50.242533   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:50 GMT
	I1109 10:31:50.242539   29322 round_trippers.go:580]     Audit-Id: 667cbb3f-106d-4852-8c27-ba9970339ab6
	I1109 10:31:50.242545   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:50.242620   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:50.242872   29322 pod_ready.go:92] pod "kube-scheduler-multinode-102528" in "kube-system" namespace has status "Ready":"True"
	I1109 10:31:50.242882   29322 pod_ready.go:81] duration metric: took 402.707749ms waiting for pod "kube-scheduler-multinode-102528" in "kube-system" namespace to be "Ready" ...
	I1109 10:31:50.242892   29322 pod_ready.go:38] duration metric: took 39.719867134s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
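[editor's note] Each pod_ready.go:92 line above reduces to reading the pod's Ready condition from the fetched Pod object. A minimal sketch of that check, assuming k8s.io/api/core/v1; illustrative, not minikube's exact code:

// Sketch: the "Ready":"True" test that the pod_ready.go lines report.
package readiness

import corev1 "k8s.io/api/core/v1"

// isPodReady returns true when the pod carries a PodReady condition with
// status True, which the log prints as has status "Ready":"True".
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}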
	I1109 10:31:50.242910   29322 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1109 10:31:50.250698   29322 command_runner.go:130] > -16
	I1109 10:31:50.250837   29322 ops.go:34] apiserver oom_adj: -16
	I1109 10:31:50.250846   29322 kubeadm.go:631] restartCluster took 56.8779672s
	I1109 10:31:50.250852   29322 kubeadm.go:398] StartCluster complete in 56.907883067s
	I1109 10:31:50.250864   29322 settings.go:142] acquiring lock: {Name:mke93232301b59b22d43a378e933baa222d3feda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 10:31:50.250958   29322 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/15331-22028/kubeconfig
	I1109 10:31:50.251326   29322 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15331-22028/kubeconfig: {Name:mk02bb1c68cad934afd737965b2dbda8f5a4ba2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 10:31:50.251925   29322 loader.go:374] Config loaded from file:  /Users/jenkins/minikube-integration/15331-22028/kubeconfig
	I1109 10:31:50.252087   29322 kapi.go:59] client config for multinode-102528: &rest.Config{Host:"https://127.0.0.1:62610", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/multinode-102528/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/multinode-102528/client.key", CAFile:"/Users/jenkins/minikube-integration/15331-22028/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Next
Protos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x23463c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
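[editor's note] The rest.Config dump above shows QPS:0, Burst:0 and a nil RateLimiter, so client-go falls back to its default client-side token-bucket limiter; that limiter is what produces the earlier "Waited for ... due to client-side throttling, not priority and fairness" messages. A hedged sketch of loading such a config and raising the limits, assuming k8s.io/client-go; the kubeconfig path and values are illustrative:

// Sketch: build a client config like the kapi.go dump above and tune the
// client-side rate limits. Assumes k8s.io/client-go; values illustrative.
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical path; the log uses the Jenkins workspace kubeconfig.
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	// With QPS/Burst left at zero (as in the dump), client-go applies its
	// default token-bucket limiter and logs "Waited for ... due to
	// client-side throttling". Raising them reduces those waits.
	config.QPS = 50
	config.Burst = 100

	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	fmt.Printf("connected to %s (qps=%v burst=%v)\n", config.Host, config.QPS, config.Burst)
	_ = clientset
}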
	I1109 10:31:50.252296   29322 round_trippers.go:463] GET https://127.0.0.1:62610/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1109 10:31:50.252301   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:50.252309   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:50.252314   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:50.254498   29322 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 10:31:50.254507   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:50.254512   29322 round_trippers.go:580]     Content-Length: 292
	I1109 10:31:50.254517   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:50 GMT
	I1109 10:31:50.254523   29322 round_trippers.go:580]     Audit-Id: 4630ac4e-ce5d-49f4-8d66-e1fe6e225e49
	I1109 10:31:50.254527   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:50.254532   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:50.254537   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:50.254542   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:50.254552   29322 request.go:1154] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"2c5e384a-cc55-41eb-8931-c2c8d631848e","resourceVersion":"1077","creationTimestamp":"2022-11-09T18:25:54Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1109 10:31:50.254628   29322 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "multinode-102528" rescaled to 1
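[editor's note] The rescale above goes through the deployment's scale subresource (the GET on .../deployments/coredns/scale returning an autoscaling/v1 Scale with replicas:1). A minimal sketch of the same round trip with client-go's typed client; illustrative, not minikube's kapi helper:

// Sketch: rescale the coredns deployment via the scale subresource,
// as the request on .../deployments/coredns/scale above does.
package kapi

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func rescaleCoreDNS(ctx context.Context, cs kubernetes.Interface, replicas int32) error {
	// Read the current Scale object for the deployment.
	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	if scale.Spec.Replicas == replicas {
		return nil // already at the desired size, as in the log (1 -> 1)
	}
	scale.Spec.Replicas = replicas
	// Write it back through the same subresource.
	_, err = cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
	return err
}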
	I1109 10:31:50.254658   29322 start.go:212] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1109 10:31:50.254678   29322 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1109 10:31:50.276859   29322 out.go:177] * Verifying Kubernetes components...
	I1109 10:31:50.254697   29322 addons.go:486] enableAddons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false], additional=[]
	I1109 10:31:50.254837   29322 config.go:180] Loaded profile config "multinode-102528": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1109 10:31:50.317761   29322 addons.go:65] Setting storage-provisioner=true in profile "multinode-102528"
	I1109 10:31:50.317761   29322 addons.go:65] Setting default-storageclass=true in profile "multinode-102528"
	I1109 10:31:50.317773   29322 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 10:31:50.317788   29322 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-102528"
	I1109 10:31:50.317789   29322 addons.go:227] Setting addon storage-provisioner=true in "multinode-102528"
	W1109 10:31:50.317798   29322 addons.go:236] addon storage-provisioner should already be in state true
	I1109 10:31:50.317848   29322 host.go:66] Checking if "multinode-102528" exists ...
	I1109 10:31:50.318099   29322 cli_runner.go:164] Run: docker container inspect multinode-102528 --format={{.State.Status}}
	I1109 10:31:50.318198   29322 cli_runner.go:164] Run: docker container inspect multinode-102528 --format={{.State.Status}}
	I1109 10:31:50.339067   29322 command_runner.go:130] > apiVersion: v1
	I1109 10:31:50.339087   29322 command_runner.go:130] > data:
	I1109 10:31:50.339092   29322 command_runner.go:130] >   Corefile: |
	I1109 10:31:50.339098   29322 command_runner.go:130] >     .:53 {
	I1109 10:31:50.339103   29322 command_runner.go:130] >         errors
	I1109 10:31:50.339107   29322 command_runner.go:130] >         health {
	I1109 10:31:50.339112   29322 command_runner.go:130] >            lameduck 5s
	I1109 10:31:50.339125   29322 command_runner.go:130] >         }
	I1109 10:31:50.339132   29322 command_runner.go:130] >         ready
	I1109 10:31:50.339140   29322 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1109 10:31:50.339144   29322 command_runner.go:130] >            pods insecure
	I1109 10:31:50.339148   29322 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1109 10:31:50.339153   29322 command_runner.go:130] >            ttl 30
	I1109 10:31:50.339158   29322 command_runner.go:130] >         }
	I1109 10:31:50.339163   29322 command_runner.go:130] >         prometheus :9153
	I1109 10:31:50.339166   29322 command_runner.go:130] >         hosts {
	I1109 10:31:50.339171   29322 command_runner.go:130] >            192.168.65.2 host.minikube.internal
	I1109 10:31:50.339176   29322 command_runner.go:130] >            fallthrough
	I1109 10:31:50.339180   29322 command_runner.go:130] >         }
	I1109 10:31:50.339184   29322 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1109 10:31:50.339190   29322 command_runner.go:130] >            max_concurrent 1000
	I1109 10:31:50.339194   29322 command_runner.go:130] >         }
	I1109 10:31:50.339198   29322 command_runner.go:130] >         cache 30
	I1109 10:31:50.339204   29322 command_runner.go:130] >         loop
	I1109 10:31:50.339212   29322 command_runner.go:130] >         reload
	I1109 10:31:50.339218   29322 command_runner.go:130] >         loadbalance
	I1109 10:31:50.339222   29322 command_runner.go:130] >     }
	I1109 10:31:50.339227   29322 command_runner.go:130] > kind: ConfigMap
	I1109 10:31:50.339233   29322 command_runner.go:130] > metadata:
	I1109 10:31:50.339239   29322 command_runner.go:130] >   creationTimestamp: "2022-11-09T18:25:54Z"
	I1109 10:31:50.339250   29322 command_runner.go:130] >   name: coredns
	I1109 10:31:50.339255   29322 command_runner.go:130] >   namespace: kube-system
	I1109 10:31:50.339258   29322 command_runner.go:130] >   resourceVersion: "359"
	I1109 10:31:50.339269   29322 command_runner.go:130] >   uid: a7e5939f-cbe7-4fa9-af8b-b4745b0c1a3a
	I1109 10:31:50.343797   29322 start.go:806] CoreDNS already contains "host.minikube.internal" host record, skipping...
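[editor's note] The Corefile dump and the start.go:806 line above are a fetch-then-inspect: the coredns ConfigMap is read back and the edit is skipped because the hosts{} block already carries "192.168.65.2 host.minikube.internal". A sketch of that check with client-go (the log itself fetches via kubectl over SSH); illustrative only:

// Sketch: check whether the coredns Corefile already carries the
// host.minikube.internal record, as start.go:806 reports above.
package corednscheck

import (
	"context"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// hasHostRecord fetches kube-system/coredns and looks for the host entry.
func hasHostRecord(ctx context.Context, cs kubernetes.Interface) (bool, error) {
	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	// The record lives in the Corefile's hosts{} block, e.g.
	// "192.168.65.2 host.minikube.internal" in the dump above.
	return strings.Contains(cm.Data["Corefile"], "host.minikube.internal"), nil
}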
	I1109 10:31:50.344849   29322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-102528
	I1109 10:31:50.382978   29322 loader.go:374] Config loaded from file:  /Users/jenkins/minikube-integration/15331-22028/kubeconfig
	I1109 10:31:50.404388   29322 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1109 10:31:50.404666   29322 kapi.go:59] client config for multinode-102528: &rest.Config{Host:"https://127.0.0.1:62610", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/multinode-102528/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/multinode-102528/client.key", CAFile:"/Users/jenkins/minikube-integration/15331-22028/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Next
Protos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x23463c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1109 10:31:50.425511   29322 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 10:31:50.425532   29322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1109 10:31:50.425684   29322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102528
	I1109 10:31:50.426357   29322 round_trippers.go:463] GET https://127.0.0.1:62610/apis/storage.k8s.io/v1/storageclasses
	I1109 10:31:50.426731   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:50.426777   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:50.426791   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:50.430902   29322 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1109 10:31:50.430924   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:50.430938   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:50.430967   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:50.430977   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:50.430981   29322 round_trippers.go:580]     Content-Length: 1274
	I1109 10:31:50.430986   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:50 GMT
	I1109 10:31:50.430991   29322 round_trippers.go:580]     Audit-Id: 23bd0941-1af6-4d6e-b506-0f4281edc2cf
	I1109 10:31:50.430995   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:50.431040   29322 request.go:1154] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"1079"},"items":[{"metadata":{"name":"standard","uid":"f1f19485-8735-41b4-b256-141da52da440","resourceVersion":"373","creationTimestamp":"2022-11-09T18:26:08Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2022-11-09T18:26:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubern
etes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is [truncated 250 chars]
	I1109 10:31:50.431494   29322 request.go:1154] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"f1f19485-8735-41b4-b256-141da52da440","resourceVersion":"373","creationTimestamp":"2022-11-09T18:26:08Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2022-11-09T18:26:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1109 10:31:50.431547   29322 round_trippers.go:463] PUT https://127.0.0.1:62610/apis/storage.k8s.io/v1/storageclasses/standard
	I1109 10:31:50.431553   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:50.431559   29322 round_trippers.go:473]     Content-Type: application/json
	I1109 10:31:50.431565   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:50.431570   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:50.434768   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:50.434781   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:50.434786   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:50 GMT
	I1109 10:31:50.434791   29322 round_trippers.go:580]     Audit-Id: 2db95456-0cb5-48d2-bea9-3a04a1a2756f
	I1109 10:31:50.434795   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:50.434800   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:50.434805   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:50.434814   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:50.434827   29322 round_trippers.go:580]     Content-Length: 1220
	I1109 10:31:50.434950   29322 request.go:1154] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"f1f19485-8735-41b4-b256-141da52da440","resourceVersion":"373","creationTimestamp":"2022-11-09T18:26:08Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2022-11-09T18:26:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1109 10:31:50.435030   29322 addons.go:227] Setting addon default-storageclass=true in "multinode-102528"
	W1109 10:31:50.435038   29322 addons.go:236] addon default-storageclass should already be in state true
	I1109 10:31:50.435061   29322 host.go:66] Checking if "multinode-102528" exists ...
	I1109 10:31:50.435438   29322 cli_runner.go:164] Run: docker container inspect multinode-102528 --format={{.State.Status}}
	I1109 10:31:50.436278   29322 node_ready.go:35] waiting up to 6m0s for node "multinode-102528" to be "Ready" ...
	I1109 10:31:50.436401   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:50.436411   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:50.436418   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:50.436423   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:50.439461   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:50.439476   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:50.439483   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:50.439487   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:50.439492   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:50 GMT
	I1109 10:31:50.439496   29322 round_trippers.go:580]     Audit-Id: 6112e173-5909-4e2e-8dcb-b178751a3503
	I1109 10:31:50.439500   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:50.439504   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:50.439578   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:50.439839   29322 node_ready.go:49] node "multinode-102528" has status "Ready":"True"
	I1109 10:31:50.439846   29322 node_ready.go:38] duration metric: took 3.536044ms waiting for node "multinode-102528" to be "Ready" ...
	I1109 10:31:50.439858   29322 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1109 10:31:50.486912   29322 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62611 SSHKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/multinode-102528/id_rsa Username:docker}
	I1109 10:31:50.493359   29322 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I1109 10:31:50.493371   29322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1109 10:31:50.493458   29322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102528
	I1109 10:31:50.551229   29322 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62611 SSHKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/multinode-102528/id_rsa Username:docker}
	I1109 10:31:50.577025   29322 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 10:31:50.636473   29322 request.go:614] Waited for 196.560259ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods
	I1109 10:31:50.636515   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods
	I1109 10:31:50.636521   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:50.636530   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:50.636537   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:50.640815   29322 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1109 10:31:50.640845   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:50.640859   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:50 GMT
	I1109 10:31:50.640876   29322 round_trippers.go:580]     Audit-Id: 64877145-f638-4637-8d9a-e8d0b998b412
	I1109 10:31:50.640892   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:50.640897   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:50.640915   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:50.640921   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:50.641915   29322 request.go:1154] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1079"},"items":[{"metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1073","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 85374 chars]
	I1109 10:31:50.643894   29322 pod_ready.go:78] waiting up to 6m0s for pod "coredns-565d847f94-fx6lt" in "kube-system" namespace to be "Ready" ...
	I1109 10:31:50.645672   29322 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1109 10:31:50.737781   29322 command_runner.go:130] > serviceaccount/storage-provisioner unchanged
	I1109 10:31:50.739140   29322 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner unchanged
	I1109 10:31:50.740667   29322 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I1109 10:31:50.742361   29322 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I1109 10:31:50.743932   29322 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath unchanged
	I1109 10:31:50.750008   29322 command_runner.go:130] > pod/storage-provisioner configured
	I1109 10:31:50.836743   29322 request.go:614] Waited for 192.804333ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:50.836804   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:50.836810   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:50.836816   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:50.836823   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:50.839478   29322 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 10:31:50.839493   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:50.839499   29322 round_trippers.go:580]     Audit-Id: 8e3b6e75-41b6-41d2-96e2-af040440a726
	I1109 10:31:50.839507   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:50.839513   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:50.839518   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:50.839523   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:50.839528   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:50 GMT
	I1109 10:31:50.839608   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1073","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6553 chars]
	I1109 10:31:50.845820   29322 command_runner.go:130] > storageclass.storage.k8s.io/standard unchanged
	I1109 10:31:50.875244   29322 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1109 10:31:50.897126   29322 addons.go:488] enableAddons completed in 642.446218ms
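[editor's note] The addon flow that just completed is: copy each manifest to /etc/kubernetes/addons/ over SSH, then run the bundled kubectl apply -f against it (the "serviceaccount/storage-provisioner unchanged" lines are kubectl's own output). A minimal local sketch of the apply step with os/exec; paths are illustrative and this is not minikube's ssh_runner:

// Sketch: the "kubectl apply -f <addon manifest>" step shown above,
// run locally instead of over minikube's SSH runner. Paths illustrative.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Hypothetical paths; inside the node minikube uses
	// /var/lib/minikube/kubeconfig and /etc/kubernetes/addons/*.yaml.
	cmd := exec.Command("kubectl",
		"--kubeconfig", "/path/to/kubeconfig",
		"apply", "-f", "/path/to/storage-provisioner.yaml",
	)
	// CombinedOutput captures lines like
	// "serviceaccount/storage-provisioner unchanged" seen in the log.
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		panic(err)
	}
}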
	I1109 10:31:51.036513   29322 request.go:614] Waited for 196.563664ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:51.036576   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:51.036593   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:51.036640   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:51.036653   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:51.040446   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:51.040462   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:51.040474   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:51 GMT
	I1109 10:31:51.040481   29322 round_trippers.go:580]     Audit-Id: 79a8b74f-f251-41b9-a718-a682821837ad
	I1109 10:31:51.040489   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:51.040503   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:51.040512   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:51.040521   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:51.040600   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:51.040878   29322 pod_ready.go:92] pod "coredns-565d847f94-fx6lt" in "kube-system" namespace has status "Ready":"True"
	I1109 10:31:51.040886   29322 pod_ready.go:81] duration metric: took 396.991895ms waiting for pod "coredns-565d847f94-fx6lt" in "kube-system" namespace to be "Ready" ...
	I1109 10:31:51.040894   29322 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-102528" in "kube-system" namespace to be "Ready" ...
	I1109 10:31:51.238482   29322 request.go:614] Waited for 197.459325ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/etcd-multinode-102528
	I1109 10:31:51.238538   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/etcd-multinode-102528
	I1109 10:31:51.238548   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:51.238562   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:51.238572   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:51.242284   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:51.242300   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:51.242308   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:51 GMT
	I1109 10:31:51.242321   29322 round_trippers.go:580]     Audit-Id: 89518439-dcc9-46f6-b0dc-3c61e0d79185
	I1109 10:31:51.242328   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:51.242338   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:51.242344   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:51.242354   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:51.242575   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-102528","namespace":"kube-system","uid":"5dde8340-2916-4da6-91aa-ea6dfe24a5ad","resourceVersion":"1041","creationTimestamp":"2022-11-09T18:25:54Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"58165e0d3ee72e9b0f054fadec557161","kubernetes.io/config.mirror":"58165e0d3ee72e9b0f054fadec557161","kubernetes.io/config.seen":"2022-11-09T18:25:54.343403314Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6046 chars]
	I1109 10:31:51.438416   29322 request.go:614] Waited for 195.470808ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:51.438533   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:51.438544   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:51.438556   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:51.438567   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:51.442319   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:51.442336   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:51.442344   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:51.442350   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:51.442356   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:51.442364   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:51.442370   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:51 GMT
	I1109 10:31:51.442377   29322 round_trippers.go:580]     Audit-Id: 7f83dae5-5ef3-43ed-ad86-30770d972412
	I1109 10:31:51.442476   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:51.442751   29322 pod_ready.go:92] pod "etcd-multinode-102528" in "kube-system" namespace has status "Ready":"True"
	I1109 10:31:51.442769   29322 pod_ready.go:81] duration metric: took 401.864919ms waiting for pod "etcd-multinode-102528" in "kube-system" namespace to be "Ready" ...
	I1109 10:31:51.442798   29322 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-102528" in "kube-system" namespace to be "Ready" ...
	I1109 10:31:51.636989   29322 request.go:614] Waited for 194.151085ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-102528
	I1109 10:31:51.637089   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-102528
	I1109 10:31:51.637100   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:51.637119   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:51.637134   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:51.641170   29322 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1109 10:31:51.641185   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:51.641193   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:51 GMT
	I1109 10:31:51.641199   29322 round_trippers.go:580]     Audit-Id: 22fde405-018a-4404-bf12-233659e0904c
	I1109 10:31:51.641206   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:51.641213   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:51.641219   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:51.641225   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:51.641306   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-102528","namespace":"kube-system","uid":"f48fa313-e8ec-42bc-87bc-7daede794fe2","resourceVersion":"1029","creationTimestamp":"2022-11-09T18:25:54Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"b110864cd1ed66678c31ad09d14c41ec","kubernetes.io/config.mirror":"b110864cd1ed66678c31ad09d14c41ec","kubernetes.io/config.seen":"2022-11-09T18:25:54.343403906Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 8428 chars]
	I1109 10:31:51.838408   29322 request.go:614] Waited for 196.706833ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:51.838485   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:51.838498   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:51.838511   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:51.838524   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:51.842240   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:51.842256   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:51.842273   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:51.842282   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:51.842293   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:51.842302   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:51 GMT
	I1109 10:31:51.842310   29322 round_trippers.go:580]     Audit-Id: 6e4fa976-5104-4a97-921b-d75a8bede7fb
	I1109 10:31:51.842317   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:51.842575   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:51.842853   29322 pod_ready.go:92] pod "kube-apiserver-multinode-102528" in "kube-system" namespace has status "Ready":"True"
	I1109 10:31:51.842859   29322 pod_ready.go:81] duration metric: took 400.063646ms waiting for pod "kube-apiserver-multinode-102528" in "kube-system" namespace to be "Ready" ...
	I1109 10:31:51.842867   29322 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-102528" in "kube-system" namespace to be "Ready" ...
	I1109 10:31:52.036495   29322 request.go:614] Waited for 193.584888ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-102528
	I1109 10:31:52.036557   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-102528
	I1109 10:31:52.036601   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:52.036620   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:52.036633   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:52.039756   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:52.039772   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:52.039783   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:52.039794   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:52 GMT
	I1109 10:31:52.039801   29322 round_trippers.go:580]     Audit-Id: 1e8740bb-d312-4092-b67f-48cb6c686c8d
	I1109 10:31:52.039842   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:52.039850   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:52.039861   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:52.040044   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-102528","namespace":"kube-system","uid":"3dd056ba-22b5-4b0c-aa7e-9e00d215df9a","resourceVersion":"1035","creationTimestamp":"2022-11-09T18:25:52Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ec9e561364ffe02db1e38ab82ddc699b","kubernetes.io/config.mirror":"ec9e561364ffe02db1e38ab82ddc699b","kubernetes.io/config.seen":"2022-11-09T18:25:43.900701692Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 8005 chars]
	I1109 10:31:52.236382   29322 request.go:614] Waited for 195.974896ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:52.236433   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:52.236451   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:52.236503   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:52.236516   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:52.240104   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:52.240122   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:52.240131   29322 round_trippers.go:580]     Audit-Id: 67b77a0a-a747-4e73-9dd6-0ad109cdc4f6
	I1109 10:31:52.240138   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:52.240145   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:52.240151   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:52.240158   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:52.240165   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:52 GMT
	I1109 10:31:52.240259   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:52.240526   29322 pod_ready.go:92] pod "kube-controller-manager-multinode-102528" in "kube-system" namespace has status "Ready":"True"
	I1109 10:31:52.240534   29322 pod_ready.go:81] duration metric: took 397.6731ms waiting for pod "kube-controller-manager-multinode-102528" in "kube-system" namespace to be "Ready" ...
	I1109 10:31:52.240543   29322 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9wsxp" in "kube-system" namespace to be "Ready" ...
	I1109 10:31:52.438109   29322 request.go:614] Waited for 197.482373ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/kube-proxy-9wsxp
	I1109 10:31:52.438175   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/kube-proxy-9wsxp
	I1109 10:31:52.438185   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:52.438201   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:52.438211   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:52.441973   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:52.441985   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:52.441992   29322 round_trippers.go:580]     Audit-Id: ff85bade-92d9-4986-b3b2-3b21f5d86198
	I1109 10:31:52.442020   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:52.442028   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:52.442033   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:52.442038   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:52.442042   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:52 GMT
	I1109 10:31:52.442096   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-9wsxp","generateName":"kube-proxy-","namespace":"kube-system","uid":"03c6822b-9fef-4fa3-82a3-bb5082cf31b3","resourceVersion":"1023","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"controller-revision-hash":"b9c5d5dc4","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"bf8e9b6c-a049-46db-b636-548666fd5424","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf8e9b6c-a049-46db-b636-548666fd5424\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5736 chars]
	I1109 10:31:52.637720   29322 request.go:614] Waited for 195.332813ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:52.637787   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:52.637803   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:52.637818   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:52.637829   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:52.641380   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:52.641395   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:52.641403   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:52.641410   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:52.641416   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:52.641431   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:52.641440   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:52 GMT
	I1109 10:31:52.641446   29322 round_trippers.go:580]     Audit-Id: ee6ef8c5-ad9a-47cf-aebb-fa9813d6e71d
	I1109 10:31:52.641686   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:52.641999   29322 pod_ready.go:92] pod "kube-proxy-9wsxp" in "kube-system" namespace has status "Ready":"True"
	I1109 10:31:52.642009   29322 pod_ready.go:81] duration metric: took 401.471929ms waiting for pod "kube-proxy-9wsxp" in "kube-system" namespace to be "Ready" ...
	I1109 10:31:52.642021   29322 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-c4lh6" in "kube-system" namespace to be "Ready" ...
	I1109 10:31:52.837373   29322 request.go:614] Waited for 195.279986ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/kube-proxy-c4lh6
	I1109 10:31:52.837462   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/kube-proxy-c4lh6
	I1109 10:31:52.837474   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:52.837488   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:52.837499   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:52.841070   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:52.841085   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:52.841092   29322 round_trippers.go:580]     Audit-Id: f57c670c-d744-434a-903b-c333dc5033cf
	I1109 10:31:52.841099   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:52.841105   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:52.841111   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:52.841119   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:52.841125   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:52 GMT
	I1109 10:31:52.841683   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-c4lh6","generateName":"kube-proxy-","namespace":"kube-system","uid":"e9055586-6022-464a-acdd-6fce3c87392b","resourceVersion":"845","creationTimestamp":"2022-11-09T18:26:28Z","labels":{"controller-revision-hash":"b9c5d5dc4","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"bf8e9b6c-a049-46db-b636-548666fd5424","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf8e9b6c-a049-46db-b636-548666fd5424\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5741 chars]
	I1109 10:31:53.036399   29322 request.go:614] Waited for 194.425284ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:62610/api/v1/nodes/multinode-102528-m02
	I1109 10:31:53.036461   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528-m02
	I1109 10:31:53.036560   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:53.036577   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:53.036599   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:53.039879   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:53.039896   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:53.039902   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:53 GMT
	I1109 10:31:53.039907   29322 round_trippers.go:580]     Audit-Id: 75eb063e-866b-4289-927e-2805251d8167
	I1109 10:31:53.039912   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:53.039920   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:53.039925   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:53.039930   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:53.039986   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528-m02","uid":"e1542fe1-dc88-406c-b080-a5120e5abea2","resourceVersion":"857","creationTimestamp":"2022-11-09T18:29:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:29:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:29:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.ku
bernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f [truncated 4536 chars]
	I1109 10:31:53.040183   29322 pod_ready.go:92] pod "kube-proxy-c4lh6" in "kube-system" namespace has status "Ready":"True"
	I1109 10:31:53.040189   29322 pod_ready.go:81] duration metric: took 398.156397ms waiting for pod "kube-proxy-c4lh6" in "kube-system" namespace to be "Ready" ...
	I1109 10:31:53.040196   29322 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kh6r6" in "kube-system" namespace to be "Ready" ...
	I1109 10:31:53.236349   29322 request.go:614] Waited for 196.117365ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/kube-proxy-kh6r6
	I1109 10:31:53.236453   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/kube-proxy-kh6r6
	I1109 10:31:53.236465   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:53.236477   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:53.236487   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:53.240993   29322 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1109 10:31:53.241006   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:53.241012   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:53.241017   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:53.241022   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:53.241027   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:53 GMT
	I1109 10:31:53.241032   29322 round_trippers.go:580]     Audit-Id: c451d501-c710-4a1c-82a5-e751549aa3c4
	I1109 10:31:53.241037   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:53.241109   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-kh6r6","generateName":"kube-proxy-","namespace":"kube-system","uid":"de2bad4b-35b4-4537-a6a3-7acd77c63e69","resourceVersion":"925","creationTimestamp":"2022-11-09T18:27:09Z","labels":{"controller-revision-hash":"b9c5d5dc4","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"bf8e9b6c-a049-46db-b636-548666fd5424","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:27:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf8e9b6c-a049-46db-b636-548666fd5424\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5743 chars]
	I1109 10:31:53.436516   29322 request.go:614] Waited for 195.139966ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:62610/api/v1/nodes/multinode-102528-m03
	I1109 10:31:53.436623   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528-m03
	I1109 10:31:53.436634   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:53.436646   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:53.436659   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:53.440442   29322 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I1109 10:31:53.440460   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:53.440468   29322 round_trippers.go:580]     Content-Length: 210
	I1109 10:31:53.440475   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:53 GMT
	I1109 10:31:53.440481   29322 round_trippers.go:580]     Audit-Id: 13e74584-6407-493a-b6d5-7dbb73a4224a
	I1109 10:31:53.440487   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:53.440493   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:53.440500   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:53.440506   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:53.440523   29322 request.go:1154] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"multinode-102528-m03\" not found","reason":"NotFound","details":{"name":"multinode-102528-m03","kind":"nodes"},"code":404}
	I1109 10:31:53.440589   29322 pod_ready.go:97] node "multinode-102528-m03" hosting pod "kube-proxy-kh6r6" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-102528-m03": nodes "multinode-102528-m03" not found
	I1109 10:31:53.440598   29322 pod_ready.go:81] duration metric: took 400.408173ms waiting for pod "kube-proxy-kh6r6" in "kube-system" namespace to be "Ready" ...
	E1109 10:31:53.440606   29322 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-102528-m03" hosting pod "kube-proxy-kh6r6" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-102528-m03": nodes "multinode-102528-m03" not found
	I1109 10:31:53.440612   29322 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-102528" in "kube-system" namespace to be "Ready" ...
	I1109 10:31:53.638340   29322 request.go:614] Waited for 197.67651ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-102528
	I1109 10:31:53.638432   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-102528
	I1109 10:31:53.638466   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:53.638480   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:53.638494   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:53.643176   29322 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1109 10:31:53.643190   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:53.643196   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:53.643201   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:53 GMT
	I1109 10:31:53.643206   29322 round_trippers.go:580]     Audit-Id: e269443f-0ae7-44bf-9779-f4b86773c058
	I1109 10:31:53.643211   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:53.643215   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:53.643220   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:53.643270   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-102528","namespace":"kube-system","uid":"26dff845-4103-4884-86e3-42c37dc577c0","resourceVersion":"1014","creationTimestamp":"2022-11-09T18:25:54Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"9865f6bce1997a307196ce89b4764fd5","kubernetes.io/config.mirror":"9865f6bce1997a307196ce89b4764fd5","kubernetes.io/config.seen":"2022-11-09T18:25:54.343402489Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 4887 chars]
	I1109 10:31:53.838352   29322 request.go:614] Waited for 194.794574ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:53.838482   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:53.838493   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:53.838504   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:53.838515   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:53.842400   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:53.842416   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:53.842423   29322 round_trippers.go:580]     Audit-Id: 4a52291a-71e6-45b4-b24d-e8723157a7af
	I1109 10:31:53.842430   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:53.842436   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:53.842441   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:53.842447   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:53.842453   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:53 GMT
	I1109 10:31:53.842518   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:53.842785   29322 pod_ready.go:92] pod "kube-scheduler-multinode-102528" in "kube-system" namespace has status "Ready":"True"
	I1109 10:31:53.842793   29322 pod_ready.go:81] duration metric: took 402.18488ms waiting for pod "kube-scheduler-multinode-102528" in "kube-system" namespace to be "Ready" ...
	I1109 10:31:53.842802   29322 pod_ready.go:38] duration metric: took 3.403025986s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
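The loop above (pod_ready.go) issues one GET per system pod and one per hosting node, passing once the pod's Ready condition is True and skipping pods whose node is gone (the 404 for multinode-102528-m03). A minimal sketch of that readiness poll with client-go; the function name, interval, and error handling are illustrative, and minikube's real helper additionally checks the node:

```go
package sketch

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls one pod until its PodReady condition is True.
func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // treat lookup errors as "not ready yet"
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}
```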
	I1109 10:31:53.842821   29322 api_server.go:51] waiting for apiserver process to appear ...
	I1109 10:31:53.842902   29322 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 10:31:53.852004   29322 command_runner.go:130] > 1777
	I1109 10:31:53.852713   29322 api_server.go:71] duration metric: took 3.598134656s to wait for apiserver process to appear ...
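In the pgrep invocation above, -f matches against the full command line, -x requires an exact pattern match, and -n picks the newest matching process, so the single output line (1777) is the apiserver's PID. A sketch of the same probe with os/exec; minikube actually issues the command through its SSH runner inside the node:

```go
package sketch

import (
	"os/exec"
	"strings"
)

// apiserverPID returns the PID of the newest process whose full
// command line matches the pattern.
func apiserverPID() (string, error) {
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	return strings.TrimSpace(string(out)), err
}
```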
	I1109 10:31:53.852722   29322 api_server.go:87] waiting for apiserver healthz status ...
	I1109 10:31:53.852730   29322 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:62610/healthz ...
	I1109 10:31:53.857837   29322 api_server.go:278] https://127.0.0.1:62610/healthz returned 200:
	ok
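The healthz probe above treats a 200 response whose body is `ok` as healthy. A minimal sketch; TLS verification is skipped here only because this illustration has no cluster CA wired in, whereas the real check presents the cluster's certificates:

```go
package sketch

import (
	"crypto/tls"
	"io"
	"net/http"
)

// apiserverHealthy GETs <base>/healthz and reports a 200 "ok" body.
func apiserverHealthy(base string) (bool, error) {
	client := &http.Client{Transport: &http.Transport{
		// Illustration only: the real probe trusts the cluster CA instead.
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get(base + "/healthz")
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return false, err
	}
	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
}
```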
	I1109 10:31:53.857869   29322 round_trippers.go:463] GET https://127.0.0.1:62610/version
	I1109 10:31:53.857874   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:53.857881   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:53.857887   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:53.858850   29322 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1109 10:31:53.858859   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:53.858865   29322 round_trippers.go:580]     Content-Length: 263
	I1109 10:31:53.858870   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:53 GMT
	I1109 10:31:53.858875   29322 round_trippers.go:580]     Audit-Id: 9195da95-e731-482a-bd29-3de3e97404a6
	I1109 10:31:53.858880   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:53.858885   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:53.858890   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:53.858895   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:53.858904   29322 request.go:1154] Response Body: {
	  "major": "1",
	  "minor": "25",
	  "gitVersion": "v1.25.3",
	  "gitCommit": "434bfd82814af038ad94d62ebe59b133fcb50506",
	  "gitTreeState": "clean",
	  "buildDate": "2022-10-12T10:49:09Z",
	  "goVersion": "go1.19.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1109 10:31:53.858925   29322 api_server.go:140] control plane version: v1.25.3
	I1109 10:31:53.858931   29322 api_server.go:130] duration metric: took 6.204895ms to wait for apiserver health ...
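The /version body above is the standard Kubernetes version.Info shape; a small local struct keeps this decoding sketch dependency-free:

```go
package sketch

import "encoding/json"

// versionInfo mirrors the fields of interest in the /version response.
type versionInfo struct {
	Major      string `json:"major"`
	Minor      string `json:"minor"`
	GitVersion string `json:"gitVersion"`
}

// controlPlaneVersion extracts gitVersion, e.g. "v1.25.3" above.
func controlPlaneVersion(body []byte) (string, error) {
	var v versionInfo
	if err := json.Unmarshal(body, &v); err != nil {
		return "", err
	}
	return v.GitVersion, nil
}
```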
	I1109 10:31:53.858935   29322 system_pods.go:43] waiting for kube-system pods to appear ...
	I1109 10:31:54.036735   29322 request.go:614] Waited for 177.758991ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods
	I1109 10:31:54.036842   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods
	I1109 10:31:54.036854   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:54.036866   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:54.036876   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:54.043136   29322 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1109 10:31:54.043149   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:54.043155   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:54.043159   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:54.043164   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:54 GMT
	I1109 10:31:54.043168   29322 round_trippers.go:580]     Audit-Id: ca34093d-d9e3-43c3-bd98-3a95ddf67286
	I1109 10:31:54.043176   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:54.043185   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:54.044715   29322 request.go:1154] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1079"},"items":[{"metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1073","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 85374 chars]
	I1109 10:31:54.046658   29322 system_pods.go:59] 12 kube-system pods found
	I1109 10:31:54.046668   29322 system_pods.go:61] "coredns-565d847f94-fx6lt" [680c8c15-39e0-4143-8dfd-30727e628800] Running
	I1109 10:31:54.046672   29322 system_pods.go:61] "etcd-multinode-102528" [5dde8340-2916-4da6-91aa-ea6dfe24a5ad] Running
	I1109 10:31:54.046677   29322 system_pods.go:61] "kindnet-6kjz8" [b34e8f27-542c-40de-80a7-cf1226429128] Running
	I1109 10:31:54.046680   29322 system_pods.go:61] "kindnet-9td8m" [bb563027-b991-4b95-921a-ee4687934118] Running
	I1109 10:31:54.046684   29322 system_pods.go:61] "kindnet-z66sn" [03cc3962-c1e0-444a-8743-743e707bf96d] Running
	I1109 10:31:54.046687   29322 system_pods.go:61] "kube-apiserver-multinode-102528" [f48fa313-e8ec-42bc-87bc-7daede794fe2] Running
	I1109 10:31:54.046692   29322 system_pods.go:61] "kube-controller-manager-multinode-102528" [3dd056ba-22b5-4b0c-aa7e-9e00d215df9a] Running
	I1109 10:31:54.046697   29322 system_pods.go:61] "kube-proxy-9wsxp" [03c6822b-9fef-4fa3-82a3-bb5082cf31b3] Running
	I1109 10:31:54.046701   29322 system_pods.go:61] "kube-proxy-c4lh6" [e9055586-6022-464a-acdd-6fce3c87392b] Running
	I1109 10:31:54.046705   29322 system_pods.go:61] "kube-proxy-kh6r6" [de2bad4b-35b4-4537-a6a3-7acd77c63e69] Running
	I1109 10:31:54.046709   29322 system_pods.go:61] "kube-scheduler-multinode-102528" [26dff845-4103-4884-86e3-42c37dc577c0] Running
	I1109 10:31:54.046727   29322 system_pods.go:61] "storage-provisioner" [5c5e247e-06db-434c-af4a-91a2c2a08779] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 10:31:54.046734   29322 system_pods.go:74] duration metric: took 187.799545ms to wait for pod list to return data ...
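The per-pod summaries above combine the pod phase with any readiness conditions that are not True, which is why storage-provisioner shows `Running / Ready:ContainersNotReady / ContainersReady:ContainersNotReady`. A sketch of that derivation, with the format approximated from the log rather than taken from minikube's exact helper:

```go
package sketch

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// podSummary renders phase plus any not-True readiness conditions,
// e.g. "Running / Ready:ContainersNotReady".
func podSummary(pod corev1.Pod) string {
	s := string(pod.Status.Phase)
	for _, c := range pod.Status.Conditions {
		if (c.Type == corev1.PodReady || c.Type == corev1.ContainersReady) &&
			c.Status != corev1.ConditionTrue {
			s += fmt.Sprintf(" / %s:%s", c.Type, c.Reason)
		}
	}
	return s
}
```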
	I1109 10:31:54.046740   29322 default_sa.go:34] waiting for default service account to be created ...
	I1109 10:31:54.238332   29322 request.go:614] Waited for 191.522629ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:62610/api/v1/namespaces/default/serviceaccounts
	I1109 10:31:54.238423   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/default/serviceaccounts
	I1109 10:31:54.238435   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:54.238448   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:54.238485   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:54.242261   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:54.242280   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:54.242287   29322 round_trippers.go:580]     Content-Length: 262
	I1109 10:31:54.242294   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:54 GMT
	I1109 10:31:54.242302   29322 round_trippers.go:580]     Audit-Id: bf1a0c7d-e17c-441d-92a8-ad49bd35de7f
	I1109 10:31:54.242308   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:54.242315   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:54.242322   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:54.242328   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:54.242344   29322 request.go:1154] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1079"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"1c004ba2-6981-48fa-895c-1cc8e56c3bb4","resourceVersion":"312","creationTimestamp":"2022-11-09T18:26:07Z"}}]}
	I1109 10:31:54.242504   29322 default_sa.go:45] found service account: "default"
	I1109 10:31:54.242513   29322 default_sa.go:55] duration metric: took 195.773608ms for default service account to be created ...
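The default_sa wait lists ServiceAccounts in the `default` namespace until one named `default` appears, as the 262-byte list response above shows. A minimal client-go sketch with illustrative interval and timeout:

```go
package sketch

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitDefaultSA blocks until the "default" ServiceAccount exists.
func waitDefaultSA(cs kubernetes.Interface, timeout time.Duration) error {
	return wait.PollImmediate(time.Second, timeout, func() (bool, error) {
		sas, err := cs.CoreV1().ServiceAccounts("default").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			return false, nil // keep polling on transient errors
		}
		for _, sa := range sas.Items {
			if sa.Name == "default" {
				return true, nil
			}
		}
		return false, nil
	})
}
```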
	I1109 10:31:54.242522   29322 system_pods.go:116] waiting for k8s-apps to be running ...
	I1109 10:31:54.437237   29322 request.go:614] Waited for 194.674489ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods
	I1109 10:31:54.437300   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods
	I1109 10:31:54.437311   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:54.437354   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:54.437369   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:54.442178   29322 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1109 10:31:54.442193   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:54.442201   29322 round_trippers.go:580]     Audit-Id: a220ae7e-c910-4cc4-963f-ced211210750
	I1109 10:31:54.442209   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:54.442215   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:54.442219   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:54.442224   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:54.442230   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:54 GMT
	I1109 10:31:54.443399   29322 request.go:1154] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1079"},"items":[{"metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1073","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 85374 chars]
	I1109 10:31:54.445340   29322 system_pods.go:86] 12 kube-system pods found
	I1109 10:31:54.445350   29322 system_pods.go:89] "coredns-565d847f94-fx6lt" [680c8c15-39e0-4143-8dfd-30727e628800] Running
	I1109 10:31:54.445355   29322 system_pods.go:89] "etcd-multinode-102528" [5dde8340-2916-4da6-91aa-ea6dfe24a5ad] Running
	I1109 10:31:54.445360   29322 system_pods.go:89] "kindnet-6kjz8" [b34e8f27-542c-40de-80a7-cf1226429128] Running
	I1109 10:31:54.445365   29322 system_pods.go:89] "kindnet-9td8m" [bb563027-b991-4b95-921a-ee4687934118] Running
	I1109 10:31:54.445370   29322 system_pods.go:89] "kindnet-z66sn" [03cc3962-c1e0-444a-8743-743e707bf96d] Running
	I1109 10:31:54.445374   29322 system_pods.go:89] "kube-apiserver-multinode-102528" [f48fa313-e8ec-42bc-87bc-7daede794fe2] Running
	I1109 10:31:54.445380   29322 system_pods.go:89] "kube-controller-manager-multinode-102528" [3dd056ba-22b5-4b0c-aa7e-9e00d215df9a] Running
	I1109 10:31:54.445384   29322 system_pods.go:89] "kube-proxy-9wsxp" [03c6822b-9fef-4fa3-82a3-bb5082cf31b3] Running
	I1109 10:31:54.445390   29322 system_pods.go:89] "kube-proxy-c4lh6" [e9055586-6022-464a-acdd-6fce3c87392b] Running
	I1109 10:31:54.445394   29322 system_pods.go:89] "kube-proxy-kh6r6" [de2bad4b-35b4-4537-a6a3-7acd77c63e69] Running
	I1109 10:31:54.445398   29322 system_pods.go:89] "kube-scheduler-multinode-102528" [26dff845-4103-4884-86e3-42c37dc577c0] Running
	I1109 10:31:54.445404   29322 system_pods.go:89] "storage-provisioner" [5c5e247e-06db-434c-af4a-91a2c2a08779] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 10:31:54.445409   29322 system_pods.go:126] duration metric: took 202.887433ms to wait for k8s-apps to be running ...
	I1109 10:31:54.445414   29322 system_svc.go:44] waiting for kubelet service to be running ....
	I1109 10:31:54.445474   29322 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 10:31:54.455206   29322 system_svc.go:56] duration metric: took 9.78964ms WaitForService to wait for kubelet.
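`systemctl is-active --quiet <unit>` prints nothing and signals the answer through its exit code, so a nil error from Run means the unit is active. A sketch with plain os/exec; minikube issues the command through its SSH runner, as logged above:

```go
package sketch

import "os/exec"

// kubeletRunning relies on the exit code of systemctl is-active:
// zero means the unit is active, anything else means it is not.
func kubeletRunning() bool {
	return exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run() == nil
}
```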
	I1109 10:31:54.455218   29322 kubeadm.go:573] duration metric: took 4.200657014s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1109 10:31:54.455232   29322 node_conditions.go:102] verifying NodePressure condition ...
	I1109 10:31:54.638318   29322 request.go:614] Waited for 183.037703ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:62610/api/v1/nodes
	I1109 10:31:54.638443   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes
	I1109 10:31:54.638453   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:54.638465   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:54.638475   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:54.642519   29322 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1109 10:31:54.642535   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:54.642543   29322 round_trippers.go:580]     Audit-Id: 38c87ca6-24bd-4a35-bb99-82b11facd25b
	I1109 10:31:54.642556   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:54.642566   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:54.642572   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:54.642579   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:54.642585   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:54 GMT
	I1109 10:31:54.642682   29322 request.go:1154] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1079"},"items":[{"metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFie
lds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time [truncated 10903 chars]
	I1109 10:31:54.643044   29322 node_conditions.go:122] node storage ephemeral capacity is 115273188Ki
	I1109 10:31:54.643052   29322 node_conditions.go:123] node cpu capacity is 6
	I1109 10:31:54.643059   29322 node_conditions.go:122] node storage ephemeral capacity is 115273188Ki
	I1109 10:31:54.643062   29322 node_conditions.go:123] node cpu capacity is 6
	I1109 10:31:54.643066   29322 node_conditions.go:105] duration metric: took 187.835565ms to run NodePressure ...
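The two capacity pairs above come from a single NodeList: one ephemeral-storage and CPU reading per node in the cluster. A client-go sketch of the same readout:

```go
package sketch

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printNodeCapacity prints the same two values logged per node above.
func printNodeCapacity(ctx context.Context, cs kubernetes.Interface) error {
	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("node %s: ephemeral storage %s, cpu %s\n",
			n.Name, storage.String(), cpu.String())
	}
	return nil
}
```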
	I1109 10:31:54.643074   29322 start.go:217] waiting for startup goroutines ...
	I1109 10:31:54.643565   29322 config.go:180] Loaded profile config "multinode-102528": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1109 10:31:54.643635   29322 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/multinode-102528/config.json ...
	I1109 10:31:54.721001   29322 out.go:177] * Starting worker node multinode-102528-m02 in cluster multinode-102528
	I1109 10:31:54.742800   29322 cache.go:120] Beginning downloading kic base image for docker with docker
	I1109 10:31:54.764850   29322 out.go:177] * Pulling base image ...
	I1109 10:31:54.807997   29322 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1109 10:31:54.808009   29322 image.go:76] Checking for gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon
	I1109 10:31:54.808031   29322 cache.go:57] Caching tarball of preloaded images
	I1109 10:31:54.808224   29322 preload.go:174] Found /Users/jenkins/minikube-integration/15331-22028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1109 10:31:54.808245   29322 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on docker
	I1109 10:31:54.809047   29322 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/multinode-102528/config.json ...
	I1109 10:31:54.865581   29322 image.go:80] Found gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon, skipping pull
	I1109 10:31:54.865596   29322 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 exists in daemon, skipping load
	I1109 10:31:54.865607   29322 cache.go:208] Successfully downloaded all kic artifacts
	I1109 10:31:54.865649   29322 start.go:364] acquiring machines lock for multinode-102528-m02: {Name:mka0ddf96880a56e449afe60431280267c5ed209 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 10:31:54.865737   29322 start.go:368] acquired machines lock for "multinode-102528-m02" in 75.463µs
	I1109 10:31:54.865758   29322 start.go:96] Skipping create...Using existing machine configuration
	I1109 10:31:54.865764   29322 fix.go:55] fixHost starting: m02
	I1109 10:31:54.866043   29322 cli_runner.go:164] Run: docker container inspect multinode-102528-m02 --format={{.State.Status}}
	I1109 10:31:54.921930   29322 fix.go:103] recreateIfNeeded on multinode-102528-m02: state=Stopped err=<nil>
	W1109 10:31:54.921962   29322 fix.go:129] unexpected machine state, will restart: <nil>
	I1109 10:31:54.943659   29322 out.go:177] * Restarting existing docker container for "multinode-102528-m02" ...
	I1109 10:31:54.985920   29322 cli_runner.go:164] Run: docker start multinode-102528-m02
	I1109 10:31:55.316320   29322 cli_runner.go:164] Run: docker container inspect multinode-102528-m02 --format={{.State.Status}}
	I1109 10:31:55.375466   29322 kic.go:415] container "multinode-102528-m02" state is running.
	I1109 10:31:55.376050   29322 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-102528-m02
	I1109 10:31:55.437402   29322 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/multinode-102528/config.json ...
	I1109 10:31:55.437859   29322 machine.go:88] provisioning docker machine ...
	I1109 10:31:55.437875   29322 ubuntu.go:169] provisioning hostname "multinode-102528-m02"
	I1109 10:31:55.437963   29322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102528-m02
	I1109 10:31:55.499733   29322 main.go:134] libmachine: Using SSH client type: native
	I1109 10:31:55.499915   29322 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6d00] 0x13e9e80 <nil>  [] 0s} 127.0.0.1 62641 <nil> <nil>}
	I1109 10:31:55.499924   29322 main.go:134] libmachine: About to run SSH command:
	sudo hostname multinode-102528-m02 && echo "multinode-102528-m02" | sudo tee /etc/hostname
	I1109 10:31:55.666249   29322 main.go:134] libmachine: SSH cmd err, output: <nil>: multinode-102528-m02
	
	I1109 10:31:55.666361   29322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102528-m02
	I1109 10:31:55.724134   29322 main.go:134] libmachine: Using SSH client type: native
	I1109 10:31:55.724308   29322 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6d00] 0x13e9e80 <nil>  [] 0s} 127.0.0.1 62641 <nil> <nil>}
	I1109 10:31:55.724320   29322 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-102528-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-102528-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-102528-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 10:31:55.840690   29322 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I1109 10:31:55.840708   29322 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15331-22028/.minikube CaCertPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15331-22028/.minikube}
	I1109 10:31:55.840721   29322 ubuntu.go:177] setting up certificates
	I1109 10:31:55.840728   29322 provision.go:83] configureAuth start
	I1109 10:31:55.840821   29322 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-102528-m02
	I1109 10:31:55.900864   29322 provision.go:138] copyHostCerts
	I1109 10:31:55.900912   29322 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/15331-22028/.minikube/key.pem
	I1109 10:31:55.900977   29322 exec_runner.go:144] found /Users/jenkins/minikube-integration/15331-22028/.minikube/key.pem, removing ...
	I1109 10:31:55.900983   29322 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15331-22028/.minikube/key.pem
	I1109 10:31:55.901078   29322 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15331-22028/.minikube/key.pem (1675 bytes)
	I1109 10:31:55.901279   29322 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.pem
	I1109 10:31:55.901322   29322 exec_runner.go:144] found /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.pem, removing ...
	I1109 10:31:55.901327   29322 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.pem
	I1109 10:31:55.901402   29322 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.pem (1082 bytes)
	I1109 10:31:55.901525   29322 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/15331-22028/.minikube/cert.pem
	I1109 10:31:55.901566   29322 exec_runner.go:144] found /Users/jenkins/minikube-integration/15331-22028/.minikube/cert.pem, removing ...
	I1109 10:31:55.901571   29322 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15331-22028/.minikube/cert.pem
	I1109 10:31:55.901634   29322 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15331-22028/.minikube/cert.pem (1123 bytes)
	I1109 10:31:55.901765   29322 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca-key.pem org=jenkins.multinode-102528-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-102528-m02]
	I1109 10:31:56.009229   29322 provision.go:172] copyRemoteCerts
	I1109 10:31:56.009294   29322 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 10:31:56.009364   29322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102528-m02
	I1109 10:31:56.070255   29322 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62641 SSHKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/multinode-102528-m02/id_rsa Username:docker}
	I1109 10:31:56.182046   29322 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1109 10:31:56.182154   29322 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1109 10:31:56.204999   29322 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1109 10:31:56.205087   29322 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1109 10:31:56.221873   29322 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1109 10:31:56.221957   29322 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1109 10:31:56.241348   29322 provision.go:86] duration metric: configureAuth took 400.61816ms
	I1109 10:31:56.241361   29322 ubuntu.go:193] setting minikube options for container-runtime
	I1109 10:31:56.241565   29322 config.go:180] Loaded profile config "multinode-102528": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1109 10:31:56.241649   29322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102528-m02
	I1109 10:31:56.299022   29322 main.go:134] libmachine: Using SSH client type: native
	I1109 10:31:56.299190   29322 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6d00] 0x13e9e80 <nil>  [] 0s} 127.0.0.1 62641 <nil> <nil>}
	I1109 10:31:56.299203   29322 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1109 10:31:56.415521   29322 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1109 10:31:56.415532   29322 ubuntu.go:71] root file system type: overlay
	I1109 10:31:56.415705   29322 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1109 10:31:56.415795   29322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102528-m02
	I1109 10:31:56.476931   29322 main.go:134] libmachine: Using SSH client type: native
	I1109 10:31:56.477098   29322 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6d00] 0x13e9e80 <nil>  [] 0s} 127.0.0.1 62641 <nil> <nil>}
	I1109 10:31:56.477157   29322 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.58.2"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1109 10:31:56.604577   29322 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.58.2
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1109 10:31:56.604697   29322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102528-m02
	I1109 10:31:56.661599   29322 main.go:134] libmachine: Using SSH client type: native
	I1109 10:31:56.661759   29322 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6d00] 0x13e9e80 <nil>  [] 0s} 127.0.0.1 62641 <nil> <nil>}
	I1109 10:31:56.661772   29322 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1109 10:31:56.781229   29322 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I1109 10:31:56.781244   29322 machine.go:91] provisioned docker machine in 1.343412808s
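The unit-file update just completed follows a compare-then-swap idiom: the freshly rendered unit is written to docker.service.new, and only if it differs from the live unit is it moved into place and the daemon restarted. A minimal sketch of that idiom, using the paths from the log (everything else is illustrative):

	# Only touch the live unit (and restart docker) when the rendered unit changed.
	UNIT=/lib/systemd/system/docker.service
	if ! sudo diff -u "$UNIT" "$UNIT.new"; then
	  sudo mv "$UNIT.new" "$UNIT"        # swap in the new unit
	  sudo systemctl daemon-reload       # make systemd re-read unit files
	  sudo systemctl -f enable docker    # ensure the service starts on boot
	  sudo systemctl -f restart docker   # apply the new ExecStart flags
	fi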
	I1109 10:31:56.781252   29322 start.go:300] post-start starting for "multinode-102528-m02" (driver="docker")
	I1109 10:31:56.781257   29322 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 10:31:56.781337   29322 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 10:31:56.781405   29322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102528-m02
	I1109 10:31:56.839805   29322 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62641 SSHKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/multinode-102528-m02/id_rsa Username:docker}
	I1109 10:31:56.926223   29322 ssh_runner.go:195] Run: cat /etc/os-release
	I1109 10:31:56.929787   29322 command_runner.go:130] > NAME="Ubuntu"
	I1109 10:31:56.929798   29322 command_runner.go:130] > VERSION="20.04.5 LTS (Focal Fossa)"
	I1109 10:31:56.929802   29322 command_runner.go:130] > ID=ubuntu
	I1109 10:31:56.929808   29322 command_runner.go:130] > ID_LIKE=debian
	I1109 10:31:56.929814   29322 command_runner.go:130] > PRETTY_NAME="Ubuntu 20.04.5 LTS"
	I1109 10:31:56.929819   29322 command_runner.go:130] > VERSION_ID="20.04"
	I1109 10:31:56.929826   29322 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I1109 10:31:56.929831   29322 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I1109 10:31:56.929835   29322 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I1109 10:31:56.929847   29322 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I1109 10:31:56.929852   29322 command_runner.go:130] > VERSION_CODENAME=focal
	I1109 10:31:56.929856   29322 command_runner.go:130] > UBUNTU_CODENAME=focal
	I1109 10:31:56.929913   29322 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1109 10:31:56.929926   29322 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1109 10:31:56.929933   29322 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1109 10:31:56.929944   29322 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I1109 10:31:56.929951   29322 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15331-22028/.minikube/addons for local assets ...
	I1109 10:31:56.930051   29322 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15331-22028/.minikube/files for local assets ...
	I1109 10:31:56.930235   29322 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15331-22028/.minikube/files/etc/ssl/certs/228682.pem -> 228682.pem in /etc/ssl/certs
	I1109 10:31:56.930241   29322 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/files/etc/ssl/certs/228682.pem -> /etc/ssl/certs/228682.pem
	I1109 10:31:56.930460   29322 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1109 10:31:56.937868   29322 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/files/etc/ssl/certs/228682.pem --> /etc/ssl/certs/228682.pem (1708 bytes)
	I1109 10:31:56.954936   29322 start.go:303] post-start completed in 173.679621ms
	I1109 10:31:56.955023   29322 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 10:31:56.955087   29322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102528-m02
	I1109 10:31:57.012790   29322 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62641 SSHKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/multinode-102528-m02/id_rsa Username:docker}
	I1109 10:31:57.093835   29322 command_runner.go:130] > 6%
	I1109 10:31:57.093921   29322 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1109 10:31:57.098212   29322 command_runner.go:130] > 99G
	I1109 10:31:57.098556   29322 fix.go:57] fixHost completed within 2.232848247s
	I1109 10:31:57.098568   29322 start.go:83] releasing machines lock for "multinode-102528-m02", held for 2.232882802s
	I1109 10:31:57.098659   29322 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-102528-m02
	I1109 10:31:57.175556   29322 out.go:177] * Found network options:
	I1109 10:31:57.197555   29322 out.go:177]   - NO_PROXY=192.168.58.2
	W1109 10:31:57.219434   29322 proxy.go:119] fail to check proxy env: Error ip not in block
	W1109 10:31:57.219524   29322 proxy.go:119] fail to check proxy env: Error ip not in block
	I1109 10:31:57.219744   29322 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1109 10:31:57.219745   29322 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1109 10:31:57.219878   29322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102528-m02
	I1109 10:31:57.219880   29322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102528-m02
	I1109 10:31:57.280827   29322 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62641 SSHKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/multinode-102528-m02/id_rsa Username:docker}
	I1109 10:31:57.281500   29322 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62641 SSHKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/multinode-102528-m02/id_rsa Username:docker}
	I1109 10:31:57.454045   29322 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1109 10:31:57.454098   29322 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (234 bytes)
	I1109 10:31:57.467851   29322 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 10:31:57.550777   29322 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1109 10:31:57.641854   29322 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1109 10:31:57.652490   29322 command_runner.go:130] > # /lib/systemd/system/docker.service
	I1109 10:31:57.653618   29322 command_runner.go:130] > [Unit]
	I1109 10:31:57.653629   29322 command_runner.go:130] > Description=Docker Application Container Engine
	I1109 10:31:57.653635   29322 command_runner.go:130] > Documentation=https://docs.docker.com
	I1109 10:31:57.653641   29322 command_runner.go:130] > BindsTo=containerd.service
	I1109 10:31:57.653649   29322 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I1109 10:31:57.653680   29322 command_runner.go:130] > Wants=network-online.target
	I1109 10:31:57.653702   29322 command_runner.go:130] > Requires=docker.socket
	I1109 10:31:57.653707   29322 command_runner.go:130] > StartLimitBurst=3
	I1109 10:31:57.653711   29322 command_runner.go:130] > StartLimitIntervalSec=60
	I1109 10:31:57.653714   29322 command_runner.go:130] > [Service]
	I1109 10:31:57.653718   29322 command_runner.go:130] > Type=notify
	I1109 10:31:57.653722   29322 command_runner.go:130] > Restart=on-failure
	I1109 10:31:57.653725   29322 command_runner.go:130] > Environment=NO_PROXY=192.168.58.2
	I1109 10:31:57.653732   29322 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1109 10:31:57.653743   29322 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1109 10:31:57.653749   29322 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1109 10:31:57.653754   29322 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1109 10:31:57.653760   29322 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1109 10:31:57.653766   29322 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1109 10:31:57.653771   29322 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1109 10:31:57.653785   29322 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1109 10:31:57.653791   29322 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1109 10:31:57.653795   29322 command_runner.go:130] > ExecStart=
	I1109 10:31:57.653808   29322 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I1109 10:31:57.653814   29322 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1109 10:31:57.653819   29322 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1109 10:31:57.653825   29322 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1109 10:31:57.653831   29322 command_runner.go:130] > LimitNOFILE=infinity
	I1109 10:31:57.653837   29322 command_runner.go:130] > LimitNPROC=infinity
	I1109 10:31:57.653840   29322 command_runner.go:130] > LimitCORE=infinity
	I1109 10:31:57.653847   29322 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1109 10:31:57.653852   29322 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1109 10:31:57.653855   29322 command_runner.go:130] > TasksMax=infinity
	I1109 10:31:57.653859   29322 command_runner.go:130] > TimeoutStartSec=0
	I1109 10:31:57.653865   29322 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1109 10:31:57.653868   29322 command_runner.go:130] > Delegate=yes
	I1109 10:31:57.653877   29322 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1109 10:31:57.653881   29322 command_runner.go:130] > KillMode=process
	I1109 10:31:57.653884   29322 command_runner.go:130] > [Install]
	I1109 10:31:57.653888   29322 command_runner.go:130] > WantedBy=multi-user.target
	I1109 10:31:57.654028   29322 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I1109 10:31:57.654094   29322 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1109 10:31:57.664105   29322 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 10:31:57.675998   29322 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1109 10:31:57.676009   29322 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
	I1109 10:31:57.676873   29322 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1109 10:31:57.746356   29322 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1109 10:31:57.820129   29322 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 10:31:57.899433   29322 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1109 10:31:58.129538   29322 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1109 10:31:58.204483   29322 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 10:31:58.283728   29322 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I1109 10:31:58.293459   29322 start.go:451] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1109 10:31:58.293546   29322 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1109 10:31:58.297330   29322 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1109 10:31:58.297346   29322 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1109 10:31:58.297358   29322 command_runner.go:130] > Device: 100036h/1048630d	Inode: 131         Links: 1
	I1109 10:31:58.297365   29322 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I1109 10:31:58.297376   29322 command_runner.go:130] > Access: 2022-11-09 18:31:57.625836678 +0000
	I1109 10:31:58.297381   29322 command_runner.go:130] > Modify: 2022-11-09 18:31:57.569836675 +0000
	I1109 10:31:58.297386   29322 command_runner.go:130] > Change: 2022-11-09 18:31:57.575836675 +0000
	I1109 10:31:58.297392   29322 command_runner.go:130] >  Birth: -
	I1109 10:31:58.297612   29322 start.go:472] Will wait 60s for crictl version
	I1109 10:31:58.297680   29322 ssh_runner.go:195] Run: sudo crictl version
	I1109 10:31:58.325345   29322 command_runner.go:130] > Version:  0.1.0
	I1109 10:31:58.325357   29322 command_runner.go:130] > RuntimeName:  docker
	I1109 10:31:58.325361   29322 command_runner.go:130] > RuntimeVersion:  20.10.20
	I1109 10:31:58.325365   29322 command_runner.go:130] > RuntimeApiVersion:  1.41.0
	I1109 10:31:58.327432   29322 start.go:481] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.20
	RuntimeApiVersion:  1.41.0
	I1109 10:31:58.327530   29322 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1109 10:31:58.353552   29322 command_runner.go:130] > 20.10.20
	I1109 10:31:58.355839   29322 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1109 10:31:58.381394   29322 command_runner.go:130] > 20.10.20
	I1109 10:31:58.425287   29322 out.go:204] * Preparing Kubernetes v1.25.3 on Docker 20.10.20 ...
	I1109 10:31:58.447513   29322 out.go:177]   - env NO_PROXY=192.168.58.2
	I1109 10:31:58.468786   29322 cli_runner.go:164] Run: docker exec -t multinode-102528-m02 dig +short host.docker.internal
	I1109 10:31:58.590491   29322 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I1109 10:31:58.590597   29322 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I1109 10:31:58.594960   29322 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
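The hosts-file update above is an idempotent refresh: strip any stale line for the name, append the current mapping, and copy the temp file over /etc/hosts in one step. A minimal sketch with the IP and hostname from the log (the temp path is illustrative):

	# Rebuild /etc/hosts with exactly one entry for host.minikube.internal.
	{
	  grep -v $'\thost.minikube.internal$' /etc/hosts   # drop any stale entry
	  printf '192.168.65.2\thost.minikube.internal\n'   # append the fresh one
	} > /tmp/hosts.$$
	sudo cp /tmp/hosts.$$ /etc/hosts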
	I1109 10:31:58.604810   29322 certs.go:54] Setting up /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/multinode-102528 for IP: 192.168.58.3
	I1109 10:31:58.604952   29322 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.key
	I1109 10:31:58.605020   29322 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15331-22028/.minikube/proxy-client-ca.key
	I1109 10:31:58.605028   29322 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1109 10:31:58.605053   29322 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1109 10:31:58.605082   29322 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1109 10:31:58.605104   29322 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1109 10:31:58.605210   29322 certs.go:388] found cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/22868.pem (1338 bytes)
	W1109 10:31:58.605262   29322 certs.go:384] ignoring /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/22868_empty.pem, impossibly tiny 0 bytes
	I1109 10:31:58.605274   29322 certs.go:388] found cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca-key.pem (1675 bytes)
	I1109 10:31:58.605310   29322 certs.go:388] found cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca.pem (1082 bytes)
	I1109 10:31:58.605353   29322 certs.go:388] found cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/cert.pem (1123 bytes)
	I1109 10:31:58.605408   29322 certs.go:388] found cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/key.pem (1675 bytes)
	I1109 10:31:58.605493   29322 certs.go:388] found cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15331-22028/.minikube/files/etc/ssl/certs/228682.pem (1708 bytes)
	I1109 10:31:58.605533   29322 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/files/etc/ssl/certs/228682.pem -> /usr/share/ca-certificates/228682.pem
	I1109 10:31:58.605562   29322 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1109 10:31:58.605584   29322 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/22868.pem -> /usr/share/ca-certificates/22868.pem
	I1109 10:31:58.605912   29322 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 10:31:58.623459   29322 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1109 10:31:58.640694   29322 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 10:31:58.658011   29322 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1109 10:31:58.675170   29322 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/files/etc/ssl/certs/228682.pem --> /usr/share/ca-certificates/228682.pem (1708 bytes)
	I1109 10:31:58.691755   29322 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 10:31:58.709019   29322 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/22868.pem --> /usr/share/ca-certificates/22868.pem (1338 bytes)
	I1109 10:31:58.726119   29322 ssh_runner.go:195] Run: openssl version
	I1109 10:31:58.731233   29322 command_runner.go:130] > OpenSSL 1.1.1f  31 Mar 2020
	I1109 10:31:58.731602   29322 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/228682.pem && ln -fs /usr/share/ca-certificates/228682.pem /etc/ssl/certs/228682.pem"
	I1109 10:31:58.739416   29322 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/228682.pem
	I1109 10:31:58.743288   29322 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Nov  9 18:08 /usr/share/ca-certificates/228682.pem
	I1109 10:31:58.743374   29322 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Nov  9 18:08 /usr/share/ca-certificates/228682.pem
	I1109 10:31:58.743433   29322 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/228682.pem
	I1109 10:31:58.748523   29322 command_runner.go:130] > 3ec20f2e
	I1109 10:31:58.748823   29322 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/228682.pem /etc/ssl/certs/3ec20f2e.0"
	I1109 10:31:58.756013   29322 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 10:31:58.764233   29322 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 10:31:58.768192   29322 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Nov  9 18:04 /usr/share/ca-certificates/minikubeCA.pem
	I1109 10:31:58.768257   29322 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Nov  9 18:04 /usr/share/ca-certificates/minikubeCA.pem
	I1109 10:31:58.768331   29322 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 10:31:58.773370   29322 command_runner.go:130] > b5213941
	I1109 10:31:58.773760   29322 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1109 10:31:58.781192   29322 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22868.pem && ln -fs /usr/share/ca-certificates/22868.pem /etc/ssl/certs/22868.pem"
	I1109 10:31:58.789075   29322 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22868.pem
	I1109 10:31:58.792964   29322 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Nov  9 18:08 /usr/share/ca-certificates/22868.pem
	I1109 10:31:58.793048   29322 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Nov  9 18:08 /usr/share/ca-certificates/22868.pem
	I1109 10:31:58.793097   29322 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22868.pem
	I1109 10:31:58.798215   29322 command_runner.go:130] > 51391683
	I1109 10:31:58.798551   29322 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22868.pem /etc/ssl/certs/51391683.0"
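The openssl x509 -hash / ln -fs pairs above exist because OpenSSL locates CA certificates in /etc/ssl/certs via symlinks named after each certificate's subject hash (<hash>.0, <hash>.1, ...). A minimal sketch of installing one certificate that way (the certificate path is hypothetical):

	CERT=/usr/share/ca-certificates/example.pem       # hypothetical certificate
	HASH=$(openssl x509 -hash -noout -in "$CERT")     # subject hash, e.g. 3ec20f2e
	sudo ln -fs "$CERT" "/etc/ssl/certs/$HASH.0"      # name OpenSSL will look up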
	I1109 10:31:58.806577   29322 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1109 10:31:58.870083   29322 command_runner.go:130] > systemd
	I1109 10:31:58.872749   29322 cni.go:95] Creating CNI manager for ""
	I1109 10:31:58.872761   29322 cni.go:156] 2 nodes found, recommending kindnet
	I1109 10:31:58.872776   29322 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1109 10:31:58.872789   29322 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-102528 NodeName:multinode-102528-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
	I1109 10:31:58.872877   29322 kubeadm.go:161] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "multinode-102528-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1109 10:31:58.872941   29322 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-102528-m02 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.3 ClusterName:multinode-102528 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1109 10:31:58.873019   29322 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
	I1109 10:31:58.880021   29322 command_runner.go:130] > kubeadm
	I1109 10:31:58.880031   29322 command_runner.go:130] > kubectl
	I1109 10:31:58.880037   29322 command_runner.go:130] > kubelet
	I1109 10:31:58.880789   29322 binaries.go:44] Found k8s binaries, skipping transfer
	I1109 10:31:58.880846   29322 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1109 10:31:58.887904   29322 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (482 bytes)
	I1109 10:31:58.900445   29322 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1109 10:31:58.915415   29322 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1109 10:31:58.919528   29322 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 10:31:58.929108   29322 host.go:66] Checking if "multinode-102528" exists ...
	I1109 10:31:58.929324   29322 config.go:180] Loaded profile config "multinode-102528": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1109 10:31:58.929320   29322 start.go:286] JoinCluster: &{Name:multinode-102528 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:multinode-102528 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1109 10:31:58.929403   29322 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1109 10:31:58.929471   29322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102528
	I1109 10:31:58.986769   29322 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62611 SSHKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/multinode-102528/id_rsa Username:docker}
	I1109 10:31:59.123589   29322 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token tnwhsu.2be6clecyx1gx3ku --discovery-token-ca-cert-hash sha256:2868d5215be98146276bde8dad3b7790570b3d7effb1b90c11c864f3d8612f13 
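The join flow here is the standard two-step kubeadm handshake: the control plane mints a bootstrap token plus the matching join command, and the worker then runs that command, with the token and CA cert hash authenticating both sides. A minimal sketch, assuming shell access to both nodes (host placement is illustrative; the --cri-socket flag matches the log):

	# On the control plane: print a join command with a non-expiring token.
	JOIN_CMD=$(kubeadm token create --print-join-command --ttl=0)
	# On the worker: run it, pointing kubeadm at the cri-dockerd socket.
	sudo $JOIN_CMD --cri-socket /var/run/cri-dockerd.sock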
	I1109 10:31:59.128019   29322 start.go:299] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1109 10:31:59.128051   29322 host.go:66] Checking if "multinode-102528" exists ...
	I1109 10:31:59.128292   29322 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl drain multinode-102528-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I1109 10:31:59.128367   29322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102528
	I1109 10:31:59.187027   29322 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62611 SSHKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/multinode-102528/id_rsa Username:docker}
	I1109 10:31:59.326359   29322 command_runner.go:130] > node/multinode-102528-m02 cordoned
	I1109 10:32:02.344214   29322 command_runner.go:130] > pod "busybox-65db55d5d6-qdqrp" has DeletionTimestamp older than 1 seconds, skipping
	I1109 10:32:02.344229   29322 command_runner.go:130] > node/multinode-102528-m02 drained
	I1109 10:32:02.347277   29322 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I1109 10:32:02.347296   29322 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-6kjz8, kube-system/kube-proxy-c4lh6
	I1109 10:32:02.347319   29322 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl drain multinode-102528-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.219088637s)
	I1109 10:32:02.347331   29322 node.go:109] successfully drained node "m02"
	I1109 10:32:02.347678   29322 loader.go:374] Config loaded from file:  /Users/jenkins/minikube-integration/15331-22028/kubeconfig
	I1109 10:32:02.347880   29322 kapi.go:59] client config for multinode-102528: &rest.Config{Host:"https://127.0.0.1:62610", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/multinode-102528/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/multinode-102528/client.key", CAFile:"/Users/jenkins/minikube-integration/15331-22028/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x23463c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1109 10:32:02.348140   29322 request.go:1154] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I1109 10:32:02.348177   29322 round_trippers.go:463] DELETE https://127.0.0.1:62610/api/v1/nodes/multinode-102528-m02
	I1109 10:32:02.348182   29322 round_trippers.go:469] Request Headers:
	I1109 10:32:02.348190   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:32:02.348195   29322 round_trippers.go:473]     Content-Type: application/json
	I1109 10:32:02.348200   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:32:02.351423   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:32:02.351434   29322 round_trippers.go:577] Response Headers:
	I1109 10:32:02.351440   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:32:02.351457   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:32:02.351466   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:32:02.351471   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:32:02.351476   29322 round_trippers.go:580]     Content-Length: 171
	I1109 10:32:02.351481   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:32:02 GMT
	I1109 10:32:02.351487   29322 round_trippers.go:580]     Audit-Id: 3c5902c1-7c65-45c9-aa26-c152b6722404
	I1109 10:32:02.351503   29322 request.go:1154] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-102528-m02","kind":"nodes","uid":"e1542fe1-dc88-406c-b080-a5120e5abea2"}}
	I1109 10:32:02.351534   29322 node.go:125] successfully deleted node "m02"
	I1109 10:32:02.351547   29322 start.go:303] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}
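Before rejoining under the same name, the existing node is drained and its Node object deleted, which is the standard kubectl idiom for retiring a node. A minimal sketch with the node name from the log (flags reduced to the common ones):

	# Evict what can be evicted, ignore DaemonSet pods, allow emptyDir data loss...
	kubectl drain multinode-102528-m02 --force --ignore-daemonsets --delete-emptydir-data
	# ...then delete the Node object so the name is free for the rejoin.
	kubectl delete node multinode-102528-m02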
	I1109 10:32:02.351561   29322 start.go:307] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1109 10:32:02.351583   29322 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token tnwhsu.2be6clecyx1gx3ku --discovery-token-ca-cert-hash sha256:2868d5215be98146276bde8dad3b7790570b3d7effb1b90c11c864f3d8612f13 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-102528-m02"
	I1109 10:32:02.387862   29322 command_runner.go:130] > [preflight] Running pre-flight checks
	I1109 10:32:02.507179   29322 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1109 10:32:02.507205   29322 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1109 10:32:02.525754   29322 command_runner.go:130] ! W1109 18:32:02.398731    1115 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1109 10:32:02.525769   29322 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I1109 10:32:02.525777   29322 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I1109 10:32:02.525785   29322 command_runner.go:130] ! 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I1109 10:32:02.525791   29322 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I1109 10:32:02.525798   29322 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I1109 10:32:02.525808   29322 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-102528-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I1109 10:32:02.525814   29322 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	E1109 10:32:02.525853   29322 start.go:309] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token tnwhsu.2be6clecyx1gx3ku --discovery-token-ca-cert-hash sha256:2868d5215be98146276bde8dad3b7790570b3d7effb1b90c11c864f3d8612f13 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-102528-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1109 18:32:02.398731    1115 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-102528-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I1109 10:32:02.525861   29322 start.go:312] resetting worker node "m02" before attempting to rejoin cluster...
	I1109 10:32:02.525869   29322 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force"
	I1109 10:32:02.562594   29322 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I1109 10:32:02.562610   29322 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I1109 10:32:02.562631   29322 start.go:314] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I1109 10:32:02.562654   29322 retry.go:31] will retry after 11.04660288s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token tnwhsu.2be6clecyx1gx3ku --discovery-token-ca-cert-hash sha256:2868d5215be98146276bde8dad3b7790570b3d7effb1b90c11c864f3d8612f13 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-102528-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1109 18:32:02.398731    1115 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-102528-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I1109 10:32:13.609726   29322 start.go:307] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1109 10:32:13.609795   29322 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token tnwhsu.2be6clecyx1gx3ku --discovery-token-ca-cert-hash sha256:2868d5215be98146276bde8dad3b7790570b3d7effb1b90c11c864f3d8612f13 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-102528-m02"
	I1109 10:32:13.647190   29322 command_runner.go:130] > [preflight] Running pre-flight checks
	I1109 10:32:13.746055   29322 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1109 10:32:13.746074   29322 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1109 10:32:13.762908   29322 command_runner.go:130] ! W1109 18:32:13.666500    1750 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1109 10:32:13.762922   29322 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I1109 10:32:13.762932   29322 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I1109 10:32:13.762937   29322 command_runner.go:130] ! 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I1109 10:32:13.762946   29322 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I1109 10:32:13.762952   29322 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I1109 10:32:13.762962   29322 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-102528-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I1109 10:32:13.762967   29322 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	E1109 10:32:13.762997   29322 start.go:309] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token tnwhsu.2be6clecyx1gx3ku --discovery-token-ca-cert-hash sha256:2868d5215be98146276bde8dad3b7790570b3d7effb1b90c11c864f3d8612f13 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-102528-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1109 18:32:13.666500    1750 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-102528-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I1109 10:32:13.763008   29322 start.go:312] resetting worker node "m02" before attempting to rejoin cluster...
	I1109 10:32:13.763015   29322 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force"
	I1109 10:32:13.800788   29322 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I1109 10:32:13.800816   29322 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I1109 10:32:13.800831   29322 start.go:314] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I1109 10:32:13.800846   29322 retry.go:31] will retry after 21.607636321s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token tnwhsu.2be6clecyx1gx3ku --discovery-token-ca-cert-hash sha256:2868d5215be98146276bde8dad3b7790570b3d7effb1b90c11c864f3d8612f13 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-102528-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1109 18:32:13.666500    1750 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-102528-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I1109 10:32:35.408030   29322 start.go:307] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1109 10:32:35.408073   29322 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token tnwhsu.2be6clecyx1gx3ku --discovery-token-ca-cert-hash sha256:2868d5215be98146276bde8dad3b7790570b3d7effb1b90c11c864f3d8612f13 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-102528-m02"
	I1109 10:32:35.444558   29322 command_runner.go:130] > [preflight] Running pre-flight checks
	I1109 10:32:35.543964   29322 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1109 10:32:35.543985   29322 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1109 10:32:35.562373   29322 command_runner.go:130] ! W1109 18:32:35.457675    1988 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1109 10:32:35.562388   29322 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I1109 10:32:35.562398   29322 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I1109 10:32:35.562403   29322 command_runner.go:130] ! 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I1109 10:32:35.562408   29322 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I1109 10:32:35.562413   29322 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I1109 10:32:35.562423   29322 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-102528-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I1109 10:32:35.562429   29322 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	E1109 10:32:35.562461   29322 start.go:309] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token tnwhsu.2be6clecyx1gx3ku --discovery-token-ca-cert-hash sha256:2868d5215be98146276bde8dad3b7790570b3d7effb1b90c11c864f3d8612f13 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-102528-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1109 18:32:35.457675    1988 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-102528-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I1109 10:32:35.562473   29322 start.go:312] resetting worker node "m02" before attempting to rejoin cluster...
	I1109 10:32:35.562482   29322 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force"
	I1109 10:32:35.601057   29322 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I1109 10:32:35.601078   29322 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I1109 10:32:35.601103   29322 start.go:314] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I1109 10:32:35.601115   29322 retry.go:31] will retry after 26.202601198s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token tnwhsu.2be6clecyx1gx3ku --discovery-token-ca-cert-hash sha256:2868d5215be98146276bde8dad3b7790570b3d7effb1b90c11c864f3d8612f13 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-102528-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1109 18:32:35.457675    1988 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-102528-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I1109 10:33:01.803577   29322 start.go:307] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1109 10:33:01.803708   29322 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token tnwhsu.2be6clecyx1gx3ku --discovery-token-ca-cert-hash sha256:2868d5215be98146276bde8dad3b7790570b3d7effb1b90c11c864f3d8612f13 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-102528-m02"
	I1109 10:33:01.839599   29322 command_runner.go:130] ! W1109 18:33:01.850092    2260 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1109 10:33:01.839616   29322 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I1109 10:33:01.863051   29322 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I1109 10:33:01.869890   29322 command_runner.go:130] ! 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I1109 10:33:01.930313   29322 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I1109 10:33:01.930326   29322 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I1109 10:33:01.955206   29322 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-102528-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I1109 10:33:01.955219   29322 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I1109 10:33:01.958721   29322 command_runner.go:130] > [preflight] Running pre-flight checks
	I1109 10:33:01.958736   29322 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1109 10:33:01.958743   29322 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	E1109 10:33:01.958769   29322 start.go:309] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token tnwhsu.2be6clecyx1gx3ku --discovery-token-ca-cert-hash sha256:2868d5215be98146276bde8dad3b7790570b3d7effb1b90c11c864f3d8612f13 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-102528-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1109 18:33:01.850092    2260 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-102528-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I1109 10:33:01.958778   29322 start.go:312] resetting worker node "m02" before attempting to rejoin cluster...
	I1109 10:33:01.958785   29322 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force"
	I1109 10:33:02.001666   29322 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I1109 10:33:02.001679   29322 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I1109 10:33:02.001697   29322 start.go:314] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I1109 10:33:02.001708   29322 retry.go:31] will retry after 31.647853817s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token tnwhsu.2be6clecyx1gx3ku --discovery-token-ca-cert-hash sha256:2868d5215be98146276bde8dad3b7790570b3d7effb1b90c11c864f3d8612f13 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-102528-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1109 18:33:01.850092    2260 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-102528-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I1109 10:33:33.648961   29322 start.go:307] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1109 10:33:33.649059   29322 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token tnwhsu.2be6clecyx1gx3ku --discovery-token-ca-cert-hash sha256:2868d5215be98146276bde8dad3b7790570b3d7effb1b90c11c864f3d8612f13 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-102528-m02"
	I1109 10:33:33.686879   29322 command_runner.go:130] > [preflight] Running pre-flight checks
	I1109 10:33:33.790062   29322 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1109 10:33:33.790076   29322 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1109 10:33:33.808514   29322 command_runner.go:130] ! W1109 18:33:33.698660    2589 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1109 10:33:33.808529   29322 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I1109 10:33:33.808542   29322 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I1109 10:33:33.808549   29322 command_runner.go:130] ! 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I1109 10:33:33.808554   29322 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I1109 10:33:33.808562   29322 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I1109 10:33:33.808573   29322 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-102528-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I1109 10:33:33.808580   29322 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	E1109 10:33:33.808610   29322 start.go:309] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token tnwhsu.2be6clecyx1gx3ku --discovery-token-ca-cert-hash sha256:2868d5215be98146276bde8dad3b7790570b3d7effb1b90c11c864f3d8612f13 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-102528-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1109 18:33:33.698660    2589 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-102528-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I1109 10:33:33.808619   29322 start.go:312] resetting worker node "m02" before attempting to rejoin cluster...
	I1109 10:33:33.808626   29322 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force"
	I1109 10:33:33.847640   29322 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I1109 10:33:33.847659   29322 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I1109 10:33:33.847674   29322 start.go:314] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I1109 10:33:33.847685   29322 retry.go:31] will retry after 46.809773289s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token tnwhsu.2be6clecyx1gx3ku --discovery-token-ca-cert-hash sha256:2868d5215be98146276bde8dad3b7790570b3d7effb1b90c11c864f3d8612f13 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-102528-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1109 18:33:33.698660    2589 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-102528-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I1109 10:34:20.658411   29322 start.go:307] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1109 10:34:20.658560   29322 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token tnwhsu.2be6clecyx1gx3ku --discovery-token-ca-cert-hash sha256:2868d5215be98146276bde8dad3b7790570b3d7effb1b90c11c864f3d8612f13 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-102528-m02"
	I1109 10:34:20.695495   29322 command_runner.go:130] > [preflight] Running pre-flight checks
	I1109 10:34:20.794031   29322 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1109 10:34:20.794057   29322 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1109 10:34:20.813075   29322 command_runner.go:130] ! W1109 18:34:20.697530    3029 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1109 10:34:20.813097   29322 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I1109 10:34:20.813112   29322 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I1109 10:34:20.813119   29322 command_runner.go:130] ! 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I1109 10:34:20.813124   29322 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I1109 10:34:20.813131   29322 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I1109 10:34:20.813142   29322 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-102528-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I1109 10:34:20.813149   29322 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	E1109 10:34:20.813182   29322 start.go:309] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token tnwhsu.2be6clecyx1gx3ku --discovery-token-ca-cert-hash sha256:2868d5215be98146276bde8dad3b7790570b3d7effb1b90c11c864f3d8612f13 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-102528-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1109 18:34:20.697530    3029 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-102528-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I1109 10:34:20.813190   29322 start.go:312] resetting worker node "m02" before attempting to rejoin cluster...
	I1109 10:34:20.813197   29322 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force"
	I1109 10:34:20.850146   29322 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I1109 10:34:20.850163   29322 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I1109 10:34:20.850183   29322 start.go:314] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I1109 10:34:20.850201   29322 start.go:288] JoinCluster complete in 2m21.924612469s
	I1109 10:34:20.872332   29322 out.go:177] 
	W1109 10:34:20.909138   29322 out.go:239] X Exiting due to GUEST_START: adding node: joining cp: error joining worker node to cluster: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token tnwhsu.2be6clecyx1gx3ku --discovery-token-ca-cert-hash sha256:2868d5215be98146276bde8dad3b7790570b3d7effb1b90c11c864f3d8612f13 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-102528-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1109 18:34:20.697530    3029 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-102528-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	
	W1109 10:34:20.909166   29322 out.go:239] * 
	W1109 10:34:20.910481   29322 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1109 10:34:21.024791   29322 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:354: failed to start cluster. args "out/minikube-darwin-amd64 start -p multinode-102528 --wait=true -v=8 --alsologtostderr --driver=docker " : exit status 80
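Note on the failure above: the join loop is wedged between two safety checks. Every `kubeadm join` aborts in its kubelet-start phase because a Node named "multinode-102528-m02" already exists in the cluster with status "Ready"; the stale Node object is deleted once (node.go:125 above), but the worker's kubelet is evidently still running (see the repeated "[WARNING Port-10250]: Port 10250 is in use" preflight line) and re-registers the node before the join completes. The cleanup path is blocked too: the logged `kubeadm reset --force` names no CRI socket, and kubeadm refuses to guess between the two endpoints it detects (containerd and cri-dockerd). With both paths failing, the backoff retries (11.0s, 21.6s, 26.2s, 31.6s, 46.8s) repeat the same error until JoinCluster gives up after 2m21s and minikube exits with status 80 (the GUEST_START class shown above). A minimal manual-recovery sketch, assuming shell access inside the m02 container for the systemctl/kubeadm steps and a kubectl context pointed at the multinode-102528 control plane for the delete; these commands are illustrative, not something the test harness runs:

	# stop the kubelet so the stale node cannot re-register itself
	sudo systemctl stop kubelet
	# remove the stale Node object so a fresh join is accepted
	kubectl delete node multinode-102528-m02
	# reset the worker, naming the CRI socket explicitly so kubeadm
	# does not abort on finding both containerd and cri-dockerd
	sudo kubeadm reset --force --cri-socket unix:///var/run/cri-dockerd.sock

After such a reset, the same `kubeadm join ... --cri-socket /var/run/cri-dockerd.sock` command logged above should no longer hit the duplicate-Node error, since neither the stale Node nor the busy kubelet port would remain.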
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/RestartMultiNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-102528
helpers_test.go:235: (dbg) docker inspect multinode-102528:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "119f7013aec3d3f68586d0a60b3a0efcbd71ef25a6ff72f109e5edcc67d01f06",
	        "Created": "2022-11-09T18:25:35.156254459Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 101235,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-11-09T18:30:49.801182329Z",
	            "FinishedAt": "2022-11-09T18:30:35.699375671Z"
	        },
	        "Image": "sha256:866c1fe4e3f2d2bfd7e546c12f77c7ef1d94d65a891923ff6772712a9f20df40",
	        "ResolvConfPath": "/var/lib/docker/containers/119f7013aec3d3f68586d0a60b3a0efcbd71ef25a6ff72f109e5edcc67d01f06/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/119f7013aec3d3f68586d0a60b3a0efcbd71ef25a6ff72f109e5edcc67d01f06/hostname",
	        "HostsPath": "/var/lib/docker/containers/119f7013aec3d3f68586d0a60b3a0efcbd71ef25a6ff72f109e5edcc67d01f06/hosts",
	        "LogPath": "/var/lib/docker/containers/119f7013aec3d3f68586d0a60b3a0efcbd71ef25a6ff72f109e5edcc67d01f06/119f7013aec3d3f68586d0a60b3a0efcbd71ef25a6ff72f109e5edcc67d01f06-json.log",
	        "Name": "/multinode-102528",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "multinode-102528:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "multinode-102528",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/0fac085823b28411e9d8f13d0cf19ea03b2daad3445680dce4a77cb0a92a952e-init/diff:/var/lib/docker/overlay2/8c1487330bae95024fb04d0a8169f7cc81fd1ba3c27821870f7ac7c3f14eba21/diff:/var/lib/docker/overlay2/bcaf2c5b25be7a7acfb5b663242cc7456d579ea111b07e556bc197c7bfe8eceb/diff:/var/lib/docker/overlay2/0638d8210ce7d8ac0e4379a16e33ec4ba3dad0040bc7b1e6eee9a3ce3b1bab29/diff:/var/lib/docker/overlay2/82d04ede67e6bea7f3cfd2fd8cdf0af23333441d1a311f6c55109e45255a64ad/diff:/var/lib/docker/overlay2/00bbdacd39c41ffbc754eaba2d71640e0fb4097eb9097b8c2a5999bb5a8d4954/diff:/var/lib/docker/overlay2/dcea734b558e644021b8ede0f23c4e46a58e4c344becb334c465fd62b5d48e24/diff:/var/lib/docker/overlay2/ac3602d3dd4e947c3a4676ef8c632089eb73ee68aba964a7d95271ee18eb97f2/diff:/var/lib/docker/overlay2/ac2acc0194de08599857f1b8448ae7b4683ed77f947900bfd694cf26f6c54ffc/diff:/var/lib/docker/overlay2/fdbfaed38c23fa0bd5c54d311629017408fe01fee83151dd3f3d638a7617f4e4/diff:/var/lib/docker/overlay2/d025fd
583df9cfe294d4d46082700b7f5c621b93a796ba7f8f971ddaa60fd83a/diff:/var/lib/docker/overlay2/f4c2a2db4696fc9f1bd6e98e05d393517d2daaeb90f35ae457c61d742e4cc236/diff:/var/lib/docker/overlay2/5ca3c90c302636922d6701cd2547bba3ccd398ec5ade10e04dccd4fe6104a487/diff:/var/lib/docker/overlay2/a5a65589498adaf58375923e30a95f690962a85ecbf6af317b41821b327542b2/diff:/var/lib/docker/overlay2/ff71186ee131d2e64c9cb2be6b53d85bf84ea4a195c417de669d42fe5e10eecd/diff:/var/lib/docker/overlay2/493a221169b45236aaee4b88113fdb3c67c8fbb99e614b4a728d47a8448a3f3f/diff:/var/lib/docker/overlay2/4bafd70e2ae935045921b84746858ec62889e360ddf11495e2a15831b74efc0a/diff:/var/lib/docker/overlay2/90fd6faa0cf3969fb696847bf51d309918860f0cc4599a708e4932647f26c73e/diff:/var/lib/docker/overlay2/ea92881c6586b95c867a9734394d9d100f56f7cbe0812c11395e47b6035c4508/diff:/var/lib/docker/overlay2/ecab8d41ffba5fecbe6e01377fa7b74a9a81ceea0b6ce37ad2373c1bbf89f44a/diff:/var/lib/docker/overlay2/0a01bb2689fa7bca8ea3322bf7e0b9d33392f902c096d5e452da6755180c4a06/diff:/var/lib/d
ocker/overlay2/ab470b7aab8ddccf634d27d72ad09bcf355c2bd4439dcdf67f345220671e4238/diff:/var/lib/docker/overlay2/e7aae4cf5fe266e78947648cb680b6e10a1e6f6527df18d86605a770111ddaa5/diff:/var/lib/docker/overlay2/6dd4c667173ad3322ca465531a62d549cfe66fbb40165818a4e3923e37895eee/diff:/var/lib/docker/overlay2/6053a29c5dc20476b02a6b6d0dafc1d7a81702c6680392177192d709341eabd0/diff:/var/lib/docker/overlay2/80d8ec07feaf3a90ae374a6503523b083045c37de15abf3c2f12d0a21bea84c4/diff:/var/lib/docker/overlay2/55ad8679d9710c334bac8daf6e3b0f9a8fcafc62f44b8f2612bb054ff91aac64/diff:/var/lib/docker/overlay2/64743b589f654fa1e35b0e7be5ff94a3bebfa17c8f1c9811e0d42cdade3f57e7/diff:/var/lib/docker/overlay2/3722e4a69202d28b84adf462e6aa9f065e8079b1c00f372b68d56c9b2c44e658/diff:/var/lib/docker/overlay2/d1ceccb867521773a63007a600d64b8537e1cb227e2d9a6f9df5525e8315b3ef/diff:/var/lib/docker/overlay2/5de0b7762a7bcd971dba6ab8b5ec3a1163b2eb72c904b17e6b0b10dac2ed8cc6/diff:/var/lib/docker/overlay2/36f2255b89964a0e12e3175634bd5c1dfabf520e5a894e260323e26c3a3
83e8c/diff:/var/lib/docker/overlay2/58ca82e7923ce16120ce2bdcabd5d071ca9618a7139cac111d5d271fcb44d6b6/diff:/var/lib/docker/overlay2/c6b28d136c7e3834c9977a2115a7c798e71334d33a76997b156f96642e187677/diff:/var/lib/docker/overlay2/8a75a817735ea5c25b9b75502ba91bba33b5160dab28a17f2f44fa68bd8dcc3f/diff:/var/lib/docker/overlay2/4513fa1cc1e8023f3c0a924e36218c37dfe3595aec46e4d2d96d6c165774b8a3/diff:/var/lib/docker/overlay2/3d3be6ad927b487673f3c43210c9ea9a1acfa4d46cbcb724fce27baf9158b507/diff:/var/lib/docker/overlay2/b8e22ec10062469f680485d2f5f73afce0218c32b25e56188c00547a8152d0c7/diff:/var/lib/docker/overlay2/cb1cb5efbfa387d8fc791f28bdad103d39664ae58a6e372eddc5588db5779427/diff:/var/lib/docker/overlay2/c796de90ee7673fa4d316d056c320ee04f0b6ba574aaa33e4073e3a7200c11a6/diff:/var/lib/docker/overlay2/73c2de759693b5ffd934f7354e3db91ba89c6a5a9c24621fd7c27411bc335c5a/diff:/var/lib/docker/overlay2/46e9fe39b8edeecbe0b31037d24c2994ac3848fbb3af5ed3c47ca2fc1ad0d301/diff:/var/lib/docker/overlay2/febe0fa15a70685bf242a86e91427efdb9b7ec
302a48a7004f89cc569145c7a1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0fac085823b28411e9d8f13d0cf19ea03b2daad3445680dce4a77cb0a92a952e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0fac085823b28411e9d8f13d0cf19ea03b2daad3445680dce4a77cb0a92a952e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0fac085823b28411e9d8f13d0cf19ea03b2daad3445680dce4a77cb0a92a952e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "multinode-102528",
	                "Source": "/var/lib/docker/volumes/multinode-102528/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-102528",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-102528",
	                "name.minikube.sigs.k8s.io": "multinode-102528",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e26463068c6a518749db81ca2e51891825db101cfc9fd8fbf09b84b93e82cdb4",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "62611"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "62607"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "62608"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "62609"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "62610"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/e26463068c6a",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-102528": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.58.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "119f7013aec3",
	                        "multinode-102528"
	                    ],
	                    "NetworkID": "52b845372784f8f3eba7e0512708526b79db4e14447c6e536b5e84398e99ee94",
	                    "EndpointID": "3272cdb9e42dbf94b29902f0e24c5a9f3adbf45597ffff0a872ff8b7c637815c",
	                    "Gateway": "192.168.58.1",
	                    "IPAddress": "192.168.58.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:3a:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
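
The inspect output above shows how the kic container publishes each node port on the host: 22/tcp (SSH) lands on 62611, 2376/tcp (dockerd TLS) on 62607, and 8443/tcp (the API server) on 62610, all bound to 0.0.0.0. A minimal sketch of recovering one of these mappings by hand, using the same Go template the test driver runs later in this log (container name taken from the inspect output; the host port differs per run):

    docker container inspect \
      -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
      multinode-102528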
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-102528 -n multinode-102528
helpers_test.go:244: <<< TestMultiNode/serial/RestartMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-102528 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p multinode-102528 logs -n 25: (3.383359879s)
helpers_test.go:252: TestMultiNode/serial/RestartMultiNode logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                                            Args                                                             |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| cp      | multinode-102528 cp multinode-102528-m02:/home/docker/cp-test.txt                                                           | multinode-102528 | jenkins | v1.28.0 | 09 Nov 22 10:27 PST | 09 Nov 22 10:27 PST |
	|         | multinode-102528:/home/docker/cp-test_multinode-102528-m02_multinode-102528.txt                                             |                  |         |         |                     |                     |
	| ssh     | multinode-102528 ssh -n                                                                                                     | multinode-102528 | jenkins | v1.28.0 | 09 Nov 22 10:27 PST | 09 Nov 22 10:27 PST |
	|         | multinode-102528-m02 sudo cat                                                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| ssh     | multinode-102528 ssh -n multinode-102528 sudo cat                                                                           | multinode-102528 | jenkins | v1.28.0 | 09 Nov 22 10:27 PST | 09 Nov 22 10:27 PST |
	|         | /home/docker/cp-test_multinode-102528-m02_multinode-102528.txt                                                              |                  |         |         |                     |                     |
	| cp      | multinode-102528 cp multinode-102528-m02:/home/docker/cp-test.txt                                                           | multinode-102528 | jenkins | v1.28.0 | 09 Nov 22 10:27 PST | 09 Nov 22 10:27 PST |
	|         | multinode-102528-m03:/home/docker/cp-test_multinode-102528-m02_multinode-102528-m03.txt                                     |                  |         |         |                     |                     |
	| ssh     | multinode-102528 ssh -n                                                                                                     | multinode-102528 | jenkins | v1.28.0 | 09 Nov 22 10:27 PST | 09 Nov 22 10:27 PST |
	|         | multinode-102528-m02 sudo cat                                                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| ssh     | multinode-102528 ssh -n multinode-102528-m03 sudo cat                                                                       | multinode-102528 | jenkins | v1.28.0 | 09 Nov 22 10:27 PST | 09 Nov 22 10:27 PST |
	|         | /home/docker/cp-test_multinode-102528-m02_multinode-102528-m03.txt                                                          |                  |         |         |                     |                     |
	| cp      | multinode-102528 cp testdata/cp-test.txt                                                                                    | multinode-102528 | jenkins | v1.28.0 | 09 Nov 22 10:27 PST | 09 Nov 22 10:27 PST |
	|         | multinode-102528-m03:/home/docker/cp-test.txt                                                                               |                  |         |         |                     |                     |
	| ssh     | multinode-102528 ssh -n                                                                                                     | multinode-102528 | jenkins | v1.28.0 | 09 Nov 22 10:27 PST | 09 Nov 22 10:27 PST |
	|         | multinode-102528-m03 sudo cat                                                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| cp      | multinode-102528 cp multinode-102528-m03:/home/docker/cp-test.txt                                                           | multinode-102528 | jenkins | v1.28.0 | 09 Nov 22 10:27 PST | 09 Nov 22 10:27 PST |
	|         | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiNodeserialCopyFile3385420501/001/cp-test_multinode-102528-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-102528 ssh -n                                                                                                     | multinode-102528 | jenkins | v1.28.0 | 09 Nov 22 10:27 PST | 09 Nov 22 10:27 PST |
	|         | multinode-102528-m03 sudo cat                                                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| cp      | multinode-102528 cp multinode-102528-m03:/home/docker/cp-test.txt                                                           | multinode-102528 | jenkins | v1.28.0 | 09 Nov 22 10:27 PST | 09 Nov 22 10:27 PST |
	|         | multinode-102528:/home/docker/cp-test_multinode-102528-m03_multinode-102528.txt                                             |                  |         |         |                     |                     |
	| ssh     | multinode-102528 ssh -n                                                                                                     | multinode-102528 | jenkins | v1.28.0 | 09 Nov 22 10:27 PST | 09 Nov 22 10:27 PST |
	|         | multinode-102528-m03 sudo cat                                                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| ssh     | multinode-102528 ssh -n multinode-102528 sudo cat                                                                           | multinode-102528 | jenkins | v1.28.0 | 09 Nov 22 10:27 PST | 09 Nov 22 10:27 PST |
	|         | /home/docker/cp-test_multinode-102528-m03_multinode-102528.txt                                                              |                  |         |         |                     |                     |
	| cp      | multinode-102528 cp multinode-102528-m03:/home/docker/cp-test.txt                                                           | multinode-102528 | jenkins | v1.28.0 | 09 Nov 22 10:27 PST | 09 Nov 22 10:27 PST |
	|         | multinode-102528-m02:/home/docker/cp-test_multinode-102528-m03_multinode-102528-m02.txt                                     |                  |         |         |                     |                     |
	| ssh     | multinode-102528 ssh -n                                                                                                     | multinode-102528 | jenkins | v1.28.0 | 09 Nov 22 10:27 PST | 09 Nov 22 10:27 PST |
	|         | multinode-102528-m03 sudo cat                                                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| ssh     | multinode-102528 ssh -n multinode-102528-m02 sudo cat                                                                       | multinode-102528 | jenkins | v1.28.0 | 09 Nov 22 10:27 PST | 09 Nov 22 10:27 PST |
	|         | /home/docker/cp-test_multinode-102528-m03_multinode-102528-m02.txt                                                          |                  |         |         |                     |                     |
	| node    | multinode-102528 node stop m03                                                                                              | multinode-102528 | jenkins | v1.28.0 | 09 Nov 22 10:27 PST | 09 Nov 22 10:27 PST |
	| node    | multinode-102528 node start                                                                                                 | multinode-102528 | jenkins | v1.28.0 | 09 Nov 22 10:27 PST | 09 Nov 22 10:28 PST |
	|         | m03 --alsologtostderr                                                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-102528                                                                                                    | multinode-102528 | jenkins | v1.28.0 | 09 Nov 22 10:28 PST |                     |
	| stop    | -p multinode-102528                                                                                                         | multinode-102528 | jenkins | v1.28.0 | 09 Nov 22 10:28 PST | 09 Nov 22 10:28 PST |
	| start   | -p multinode-102528                                                                                                         | multinode-102528 | jenkins | v1.28.0 | 09 Nov 22 10:28 PST | 09 Nov 22 10:30 PST |
	|         | --wait=true -v=8                                                                                                            |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                                                           |                  |         |         |                     |                     |
	| node    | list -p multinode-102528                                                                                                    | multinode-102528 | jenkins | v1.28.0 | 09 Nov 22 10:30 PST |                     |
	| node    | multinode-102528 node delete                                                                                                | multinode-102528 | jenkins | v1.28.0 | 09 Nov 22 10:30 PST | 09 Nov 22 10:30 PST |
	|         | m03                                                                                                                         |                  |         |         |                     |                     |
	| stop    | multinode-102528 stop                                                                                                       | multinode-102528 | jenkins | v1.28.0 | 09 Nov 22 10:30 PST | 09 Nov 22 10:30 PST |
	| start   | -p multinode-102528                                                                                                         | multinode-102528 | jenkins | v1.28.0 | 09 Nov 22 10:30 PST |                     |
	|         | --wait=true -v=8                                                                                                            |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                                                           |                  |         |         |                     |                     |
	|         | --driver=docker                                                                                                             |                  |         |         |                     |                     |
	|---------|-----------------------------------------------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
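
The audit trail above pins down the step under test: a stop of the two-node cluster at 10:30, followed by the final start (no recorded end time), which is the RestartMultiNode invocation this post-mortem covers. The start command, reconstructed from the table with flags exactly as recorded:

    out/minikube-darwin-amd64 start -p multinode-102528 --wait=true -v=8 --alsologtostderr --driver=docker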
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/11/09 10:30:48
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.19.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1109 10:30:48.536912   29322 out.go:296] Setting OutFile to fd 1 ...
	I1109 10:30:48.537183   29322 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1109 10:30:48.537188   29322 out.go:309] Setting ErrFile to fd 2...
	I1109 10:30:48.537192   29322 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1109 10:30:48.537317   29322 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15331-22028/.minikube/bin
	I1109 10:30:48.537818   29322 out.go:303] Setting JSON to false
	I1109 10:30:48.556746   29322 start.go:116] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":12623,"bootTime":1668006025,"procs":393,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.0","kernelVersion":"22.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W1109 10:30:48.556849   29322 start.go:124] gopshost.Virtualization returned error: not implemented yet
	I1109 10:30:48.578543   29322 out.go:177] * [multinode-102528] minikube v1.28.0 on Darwin 13.0
	I1109 10:30:48.622116   29322 notify.go:220] Checking for updates...
	I1109 10:30:48.644206   29322 out.go:177]   - MINIKUBE_LOCATION=15331
	I1109 10:30:48.666203   29322 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15331-22028/kubeconfig
	I1109 10:30:48.688126   29322 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1109 10:30:48.710406   29322 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 10:30:48.732385   29322 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15331-22028/.minikube
	I1109 10:30:48.754842   29322 config.go:180] Loaded profile config "multinode-102528": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1109 10:30:48.755501   29322 driver.go:365] Setting default libvirt URI to qemu:///system
	I1109 10:30:48.823234   29322 docker.go:137] docker version: linux-20.10.20
	I1109 10:30:48.823401   29322 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 10:30:48.963279   29322 info.go:266] docker info: {ID:DDH7:AERQ:YR2O:D5QA:GJP7:5WGB:4FTA:645H:2EO7:4YBT:LF5P:55H2 Containers:2 ContainersRunning:0 ContainersPaused:0 ContainersStopped:2 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:false NGoroutines:47 SystemTime:2022-11-09 18:30:48.873497036 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231719936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.1] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1109 10:30:49.006881   29322 out.go:177] * Using the docker driver based on existing profile
	I1109 10:30:49.028713   29322 start.go:282] selected driver: docker
	I1109 10:30:49.028740   29322 start.go:808] validating driver "docker" against &{Name:multinode-102528 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:multinode-102528 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1109 10:30:49.028965   29322 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 10:30:49.029221   29322 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 10:30:49.170716   29322 info.go:266] docker info: {ID:DDH7:AERQ:YR2O:D5QA:GJP7:5WGB:4FTA:645H:2EO7:4YBT:LF5P:55H2 Containers:2 ContainersRunning:0 ContainersPaused:0 ContainersStopped:2 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:50 OomKillDisable:false NGoroutines:47 SystemTime:2022-11-09 18:30:49.082217702 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231719936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.1] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1109 10:30:49.173190   29322 start_flags.go:901] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 10:30:49.173219   29322 cni.go:95] Creating CNI manager for ""
	I1109 10:30:49.173226   29322 cni.go:156] 2 nodes found, recommending kindnet
	I1109 10:30:49.173246   29322 start_flags.go:317] config:
	{Name:multinode-102528 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:multinode-102528 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1109 10:30:49.216988   29322 out.go:177] * Starting control plane node multinode-102528 in cluster multinode-102528
	I1109 10:30:49.239627   29322 cache.go:120] Beginning downloading kic base image for docker with docker
	I1109 10:30:49.260796   29322 out.go:177] * Pulling base image ...
	I1109 10:30:49.302794   29322 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1109 10:30:49.302849   29322 image.go:76] Checking for gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon
	I1109 10:30:49.302891   29322 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15331-22028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	I1109 10:30:49.302912   29322 cache.go:57] Caching tarball of preloaded images
	I1109 10:30:49.303179   29322 preload.go:174] Found /Users/jenkins/minikube-integration/15331-22028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1109 10:30:49.303197   29322 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on docker
	I1109 10:30:49.304196   29322 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/multinode-102528/config.json ...
	I1109 10:30:49.360300   29322 image.go:80] Found gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon, skipping pull
	I1109 10:30:49.360318   29322 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 exists in daemon, skipping load
	I1109 10:30:49.360327   29322 cache.go:208] Successfully downloaded all kic artifacts
	I1109 10:30:49.360391   29322 start.go:364] acquiring machines lock for multinode-102528: {Name:mk70f613f7d58abdd1a6ac3ac877e9dff914f556 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 10:30:49.360512   29322 start.go:368] acquired machines lock for "multinode-102528" in 100.317µs
	I1109 10:30:49.360540   29322 start.go:96] Skipping create...Using existing machine configuration
	I1109 10:30:49.360552   29322 fix.go:55] fixHost starting: 
	I1109 10:30:49.360816   29322 cli_runner.go:164] Run: docker container inspect multinode-102528 --format={{.State.Status}}
	I1109 10:30:49.417463   29322 fix.go:103] recreateIfNeeded on multinode-102528: state=Stopped err=<nil>
	W1109 10:30:49.417502   29322 fix.go:129] unexpected machine state, will restart: <nil>
	I1109 10:30:49.461188   29322 out.go:177] * Restarting existing docker container for "multinode-102528" ...
	I1109 10:30:49.482191   29322 cli_runner.go:164] Run: docker start multinode-102528
	I1109 10:30:49.807720   29322 cli_runner.go:164] Run: docker container inspect multinode-102528 --format={{.State.Status}}
	I1109 10:30:49.865305   29322 kic.go:415] container "multinode-102528" state is running.
	I1109 10:30:49.865878   29322 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-102528
	I1109 10:30:49.925450   29322 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/multinode-102528/config.json ...
	I1109 10:30:49.925849   29322 machine.go:88] provisioning docker machine ...
	I1109 10:30:49.925874   29322 ubuntu.go:169] provisioning hostname "multinode-102528"
	I1109 10:30:49.925958   29322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102528
	I1109 10:30:49.985024   29322 main.go:134] libmachine: Using SSH client type: native
	I1109 10:30:49.985247   29322 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6d00] 0x13e9e80 <nil>  [] 0s} 127.0.0.1 62611 <nil> <nil>}
	I1109 10:30:49.985264   29322 main.go:134] libmachine: About to run SSH command:
	sudo hostname multinode-102528 && echo "multinode-102528" | sudo tee /etc/hostname
	I1109 10:30:50.117994   29322 main.go:134] libmachine: SSH cmd err, output: <nil>: multinode-102528
	
	I1109 10:30:50.118091   29322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102528
	I1109 10:30:50.178996   29322 main.go:134] libmachine: Using SSH client type: native
	I1109 10:30:50.179161   29322 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6d00] 0x13e9e80 <nil>  [] 0s} 127.0.0.1 62611 <nil> <nil>}
	I1109 10:30:50.179173   29322 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-102528' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-102528/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-102528' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 10:30:50.292940   29322 main.go:134] libmachine: SSH cmd err, output: <nil>: 
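
The two SSH commands above pin the node's hostname: the first sets it directly and rewrites /etc/hostname, the second makes sure /etc/hosts can resolve it, rewriting an existing 127.0.1.1 entry if one is present. A quick spot-check from the host side (a sketch; container name as elsewhere in this log):

    docker exec multinode-102528 grep multinode-102528 /etc/hosts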
	I1109 10:30:50.292966   29322 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15331-22028/.minikube CaCertPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15331-22028/.minikube}
	I1109 10:30:50.292984   29322 ubuntu.go:177] setting up certificates
	I1109 10:30:50.292994   29322 provision.go:83] configureAuth start
	I1109 10:30:50.293104   29322 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-102528
	I1109 10:30:50.350556   29322 provision.go:138] copyHostCerts
	I1109 10:30:50.350615   29322 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/15331-22028/.minikube/cert.pem
	I1109 10:30:50.350692   29322 exec_runner.go:144] found /Users/jenkins/minikube-integration/15331-22028/.minikube/cert.pem, removing ...
	I1109 10:30:50.350701   29322 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15331-22028/.minikube/cert.pem
	I1109 10:30:50.350805   29322 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15331-22028/.minikube/cert.pem (1123 bytes)
	I1109 10:30:50.350994   29322 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/15331-22028/.minikube/key.pem
	I1109 10:30:50.351037   29322 exec_runner.go:144] found /Users/jenkins/minikube-integration/15331-22028/.minikube/key.pem, removing ...
	I1109 10:30:50.351050   29322 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15331-22028/.minikube/key.pem
	I1109 10:30:50.351117   29322 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15331-22028/.minikube/key.pem (1675 bytes)
	I1109 10:30:50.351241   29322 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.pem
	I1109 10:30:50.351279   29322 exec_runner.go:144] found /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.pem, removing ...
	I1109 10:30:50.351284   29322 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.pem
	I1109 10:30:50.351352   29322 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.pem (1082 bytes)
	I1109 10:30:50.351484   29322 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca-key.pem org=jenkins.multinode-102528 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube multinode-102528]
	I1109 10:30:50.446600   29322 provision.go:172] copyRemoteCerts
	I1109 10:30:50.446689   29322 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 10:30:50.446755   29322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102528
	I1109 10:30:50.503602   29322 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62611 SSHKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/multinode-102528/id_rsa Username:docker}
	I1109 10:30:50.588602   29322 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1109 10:30:50.588707   29322 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1109 10:30:50.605496   29322 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1109 10:30:50.605594   29322 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1109 10:30:50.622903   29322 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1109 10:30:50.623010   29322 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1109 10:30:50.641623   29322 provision.go:86] duration metric: configureAuth took 348.61787ms
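
configureAuth above generates a server certificate covering the node's SANs and copies three files into /etc/docker; they pair up with the TLS flags rendered into the docker unit a few lines below (--tlscacert, --tlscert, --tlskey). A sketch of verifying they arrived (paths from the scp lines above):

    docker exec multinode-102528 ls -l /etc/docker/ca.pem /etc/docker/server.pem /etc/docker/server-key.pem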
	I1109 10:30:50.641638   29322 ubuntu.go:193] setting minikube options for container-runtime
	I1109 10:30:50.641832   29322 config.go:180] Loaded profile config "multinode-102528": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1109 10:30:50.641917   29322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102528
	I1109 10:30:50.700165   29322 main.go:134] libmachine: Using SSH client type: native
	I1109 10:30:50.700354   29322 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6d00] 0x13e9e80 <nil>  [] 0s} 127.0.0.1 62611 <nil> <nil>}
	I1109 10:30:50.700368   29322 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1109 10:30:50.815507   29322 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1109 10:30:50.815526   29322 ubuntu.go:71] root file system type: overlay
	I1109 10:30:50.815705   29322 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1109 10:30:50.815814   29322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102528
	I1109 10:30:50.874717   29322 main.go:134] libmachine: Using SSH client type: native
	I1109 10:30:50.874871   29322 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6d00] 0x13e9e80 <nil>  [] 0s} 127.0.0.1 62611 <nil> <nil>}
	I1109 10:30:50.874921   29322 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1109 10:30:51.002525   29322 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1109 10:30:51.002640   29322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102528
	I1109 10:30:51.059906   29322 main.go:134] libmachine: Using SSH client type: native
	I1109 10:30:51.060083   29322 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6d00] 0x13e9e80 <nil>  [] 0s} 127.0.0.1 62611 <nil> <nil>}
	I1109 10:30:51.060097   29322 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1109 10:30:51.184147   29322 main.go:134] libmachine: SSH cmd err, output: <nil>: 
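
The command above is a conditional-update idiom: diff -u exits 0 when the freshly rendered unit matches the installed one, so the || branch (swap the new file into place, reload systemd, re-enable and restart docker) runs only when the configuration actually changed. In isolation, with the paths from the log:

    sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new \
      || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; \
           sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }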
	I1109 10:30:51.184171   29322 machine.go:91] provisioned docker machine in 1.258338301s
	I1109 10:30:51.184181   29322 start.go:300] post-start starting for "multinode-102528" (driver="docker")
	I1109 10:30:51.184187   29322 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 10:30:51.184256   29322 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 10:30:51.184316   29322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102528
	I1109 10:30:51.239599   29322 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62611 SSHKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/multinode-102528/id_rsa Username:docker}
	I1109 10:30:51.327949   29322 ssh_runner.go:195] Run: cat /etc/os-release
	I1109 10:30:51.331498   29322 command_runner.go:130] > NAME="Ubuntu"
	I1109 10:30:51.331509   29322 command_runner.go:130] > VERSION="20.04.5 LTS (Focal Fossa)"
	I1109 10:30:51.331513   29322 command_runner.go:130] > ID=ubuntu
	I1109 10:30:51.331520   29322 command_runner.go:130] > ID_LIKE=debian
	I1109 10:30:51.331527   29322 command_runner.go:130] > PRETTY_NAME="Ubuntu 20.04.5 LTS"
	I1109 10:30:51.331531   29322 command_runner.go:130] > VERSION_ID="20.04"
	I1109 10:30:51.331546   29322 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I1109 10:30:51.331551   29322 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I1109 10:30:51.331555   29322 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I1109 10:30:51.331565   29322 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I1109 10:30:51.331570   29322 command_runner.go:130] > VERSION_CODENAME=focal
	I1109 10:30:51.331575   29322 command_runner.go:130] > UBUNTU_CODENAME=focal
	I1109 10:30:51.331805   29322 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1109 10:30:51.331820   29322 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1109 10:30:51.331827   29322 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1109 10:30:51.331832   29322 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I1109 10:30:51.331841   29322 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15331-22028/.minikube/addons for local assets ...
	I1109 10:30:51.331947   29322 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15331-22028/.minikube/files for local assets ...
	I1109 10:30:51.332131   29322 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15331-22028/.minikube/files/etc/ssl/certs/228682.pem -> 228682.pem in /etc/ssl/certs
	I1109 10:30:51.332137   29322 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/files/etc/ssl/certs/228682.pem -> /etc/ssl/certs/228682.pem
	I1109 10:30:51.332341   29322 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1109 10:30:51.339472   29322 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/files/etc/ssl/certs/228682.pem --> /etc/ssl/certs/228682.pem (1708 bytes)
	I1109 10:30:51.357529   29322 start.go:303] post-start completed in 173.343278ms
	I1109 10:30:51.357615   29322 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 10:30:51.357681   29322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102528
	I1109 10:30:51.413026   29322 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62611 SSHKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/multinode-102528/id_rsa Username:docker}
	I1109 10:30:51.502361   29322 command_runner.go:130] > 6%!
	(MISSING)I1109 10:30:51.502444   29322 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1109 10:30:51.506640   29322 command_runner.go:130] > 99G
	I1109 10:30:51.507026   29322 fix.go:57] fixHost completed within 2.14652871s
	I1109 10:30:51.507037   29322 start.go:83] releasing machines lock for "multinode-102528", held for 2.146573291s
	I1109 10:30:51.507144   29322 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-102528
	I1109 10:30:51.564543   29322 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1109 10:30:51.564549   29322 ssh_runner.go:195] Run: systemctl --version
	I1109 10:30:51.564626   29322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102528
	I1109 10:30:51.564630   29322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102528
	I1109 10:30:51.623344   29322 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62611 SSHKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/multinode-102528/id_rsa Username:docker}
	I1109 10:30:51.624355   29322 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62611 SSHKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/multinode-102528/id_rsa Username:docker}
	I1109 10:30:51.763408   29322 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1109 10:30:51.763503   29322 command_runner.go:130] > systemd 245 (245.4-4ubuntu3.18)
	I1109 10:30:51.763529   29322 command_runner.go:130] > +PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid
	I1109 10:30:51.763680   29322 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1109 10:30:51.771170   29322 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (234 bytes)
	I1109 10:30:51.783663   29322 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 10:30:51.848068   29322 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1109 10:30:51.930568   29322 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1109 10:30:51.939822   29322 command_runner.go:130] > # /lib/systemd/system/docker.service
	I1109 10:30:51.940003   29322 command_runner.go:130] > [Unit]
	I1109 10:30:51.940013   29322 command_runner.go:130] > Description=Docker Application Container Engine
	I1109 10:30:51.940018   29322 command_runner.go:130] > Documentation=https://docs.docker.com
	I1109 10:30:51.940022   29322 command_runner.go:130] > BindsTo=containerd.service
	I1109 10:30:51.940027   29322 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I1109 10:30:51.940031   29322 command_runner.go:130] > Wants=network-online.target
	I1109 10:30:51.940035   29322 command_runner.go:130] > Requires=docker.socket
	I1109 10:30:51.940039   29322 command_runner.go:130] > StartLimitBurst=3
	I1109 10:30:51.940043   29322 command_runner.go:130] > StartLimitIntervalSec=60
	I1109 10:30:51.940077   29322 command_runner.go:130] > [Service]
	I1109 10:30:51.940087   29322 command_runner.go:130] > Type=notify
	I1109 10:30:51.940091   29322 command_runner.go:130] > Restart=on-failure
	I1109 10:30:51.940097   29322 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1109 10:30:51.940103   29322 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1109 10:30:51.940109   29322 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1109 10:30:51.940115   29322 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1109 10:30:51.940120   29322 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1109 10:30:51.940130   29322 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1109 10:30:51.940137   29322 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1109 10:30:51.940160   29322 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1109 10:30:51.940167   29322 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1109 10:30:51.940170   29322 command_runner.go:130] > ExecStart=
	I1109 10:30:51.940182   29322 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I1109 10:30:51.940187   29322 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1109 10:30:51.940192   29322 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1109 10:30:51.940198   29322 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1109 10:30:51.940201   29322 command_runner.go:130] > LimitNOFILE=infinity
	I1109 10:30:51.940205   29322 command_runner.go:130] > LimitNPROC=infinity
	I1109 10:30:51.940213   29322 command_runner.go:130] > LimitCORE=infinity
	I1109 10:30:51.940219   29322 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1109 10:30:51.940223   29322 command_runner.go:130] > # Only systemd 226 and above support this option.
	I1109 10:30:51.940226   29322 command_runner.go:130] > TasksMax=infinity
	I1109 10:30:51.940230   29322 command_runner.go:130] > TimeoutStartSec=0
	I1109 10:30:51.940236   29322 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1109 10:30:51.940239   29322 command_runner.go:130] > Delegate=yes
	I1109 10:30:51.940245   29322 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1109 10:30:51.940249   29322 command_runner.go:130] > KillMode=process
	I1109 10:30:51.940256   29322 command_runner.go:130] > [Install]
	I1109 10:30:51.940260   29322 command_runner.go:130] > WantedBy=multi-user.target
	I1109 10:30:51.940720   29322 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I1109 10:30:51.940788   29322 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1109 10:30:51.950281   29322 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 10:30:51.962058   29322 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1109 10:30:51.962069   29322 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
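The two command_runner lines above echo back the entire /etc/crictl.yaml: both CRI endpoints point crictl at cri-dockerd rather than the default containerd socket. A minimal Go sketch of the same write, assuming passwordless sudo on the node (the program is illustrative, not minikube's actual code):

package main

import (
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Both endpoints go to cri-dockerd, mirroring the tee command above.
	payload := "runtime-endpoint: unix:///var/run/cri-dockerd.sock\n" +
		"image-endpoint: unix:///var/run/cri-dockerd.sock\n"
	cmd := exec.Command("sudo", "tee", "/etc/crictl.yaml")
	cmd.Stdin = strings.NewReader(payload)
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}
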
	I1109 10:30:51.963145   29322 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1109 10:30:52.027652   29322 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1109 10:30:52.094006   29322 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 10:30:52.162177   29322 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1109 10:30:52.421239   29322 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1109 10:30:52.485419   29322 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 10:30:52.553168   29322 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I1109 10:30:52.562393   29322 start.go:451] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1109 10:30:52.562477   29322 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1109 10:30:52.566121   29322 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1109 10:30:52.566130   29322 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1109 10:30:52.566135   29322 command_runner.go:130] > Device: 97h/151d	Inode: 118         Links: 1
	I1109 10:30:52.566140   29322 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I1109 10:30:52.566148   29322 command_runner.go:130] > Access: 2022-11-09 18:30:51.859134305 +0000
	I1109 10:30:52.566158   29322 command_runner.go:130] > Modify: 2022-11-09 18:30:51.859134305 +0000
	I1109 10:30:52.566165   29322 command_runner.go:130] > Change: 2022-11-09 18:30:51.860134306 +0000
	I1109 10:30:52.566169   29322 command_runner.go:130] >  Birth: -
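The stat output above confirms /var/run/cri-dockerd.sock appeared well inside the 60-second budget. The wait itself is a plain poll loop; a sketch under an assumed polling interval (waitForSocket is a hypothetical helper, not minikube's function):

package main

import (
	"fmt"
	"os"
	"time"
)

// Poll until the path exists as a unix socket or the deadline passes.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil
		}
		time.Sleep(500 * time.Millisecond) // assumed interval
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	fmt.Println(waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second))
}
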
	I1109 10:30:52.566251   29322 start.go:472] Will wait 60s for crictl version
	I1109 10:30:52.566293   29322 ssh_runner.go:195] Run: sudo crictl version
	I1109 10:30:52.593401   29322 command_runner.go:130] > Version:  0.1.0
	I1109 10:30:52.593412   29322 command_runner.go:130] > RuntimeName:  docker
	I1109 10:30:52.593416   29322 command_runner.go:130] > RuntimeVersion:  20.10.20
	I1109 10:30:52.593420   29322 command_runner.go:130] > RuntimeApiVersion:  1.41.0
	I1109 10:30:52.595533   29322 start.go:481] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.20
	RuntimeApiVersion:  1.41.0
	I1109 10:30:52.595625   29322 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1109 10:30:52.622062   29322 command_runner.go:130] > 20.10.20
	I1109 10:30:52.624554   29322 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1109 10:30:52.649810   29322 command_runner.go:130] > 20.10.20
	I1109 10:30:52.698010   29322 out.go:204] * Preparing Kubernetes v1.25.3 on Docker 20.10.20 ...
	I1109 10:30:52.698245   29322 cli_runner.go:164] Run: docker exec -t multinode-102528 dig +short host.docker.internal
	I1109 10:30:52.811910   29322 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I1109 10:30:52.812039   29322 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I1109 10:30:52.816280   29322 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
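The bash one-liner above rewrites /etc/hosts in place: strip any stale host.minikube.internal line, append the fresh mapping, and sudo-copy the result back. The same idea in Go (a hedged sketch; updateHosts is a hypothetical helper with minimal error handling):

package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func updateHosts(ip, name string) error {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) { // drop stale entries for this name
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	tmp := filepath.Join(os.TempDir(), "hosts.new")
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		return err
	}
	return exec.Command("sudo", "cp", tmp, "/etc/hosts").Run()
}

func main() {
	if err := updateHosts("192.168.65.2", "host.minikube.internal"); err != nil {
		log.Fatal(err)
	}
}
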
	I1109 10:30:52.826102   29322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-102528
	I1109 10:30:52.883457   29322 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1109 10:30:52.883562   29322 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1109 10:30:52.905966   29322 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.25.3
	I1109 10:30:52.905982   29322 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.25.3
	I1109 10:30:52.905987   29322 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.25.3
	I1109 10:30:52.905993   29322 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.25.3
	I1109 10:30:52.905999   29322 command_runner.go:130] > kindest/kindnetd:v20221004-44d545d1
	I1109 10:30:52.906003   29322 command_runner.go:130] > registry.k8s.io/pause:3.8
	I1109 10:30:52.906007   29322 command_runner.go:130] > registry.k8s.io/etcd:3.5.4-0
	I1109 10:30:52.906021   29322 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I1109 10:30:52.906026   29322 command_runner.go:130] > k8s.gcr.io/pause:3.6
	I1109 10:30:52.906030   29322 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1109 10:30:52.906033   29322 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I1109 10:30:52.908131   29322 docker.go:613] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	kindest/kindnetd:v20221004-44d545d1
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I1109 10:30:52.908148   29322 docker.go:543] Images already preloaded, skipping extraction
	I1109 10:30:52.908275   29322 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1109 10:30:52.928705   29322 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.25.3
	I1109 10:30:52.928717   29322 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.25.3
	I1109 10:30:52.928721   29322 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.25.3
	I1109 10:30:52.928725   29322 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.25.3
	I1109 10:30:52.928729   29322 command_runner.go:130] > kindest/kindnetd:v20221004-44d545d1
	I1109 10:30:52.928734   29322 command_runner.go:130] > registry.k8s.io/pause:3.8
	I1109 10:30:52.928739   29322 command_runner.go:130] > registry.k8s.io/etcd:3.5.4-0
	I1109 10:30:52.928746   29322 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.9.3
	I1109 10:30:52.928751   29322 command_runner.go:130] > k8s.gcr.io/pause:3.6
	I1109 10:30:52.928763   29322 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1109 10:30:52.928771   29322 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I1109 10:30:52.931544   29322 docker.go:613] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	kindest/kindnetd:v20221004-44d545d1
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I1109 10:30:52.931565   29322 cache_images.go:84] Images are preloaded, skipping loading
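"Images are preloaded, skipping loading" is decided by comparing the docker images listing against the image set required for v1.25.3. A sketch of that set-containment check (an illustrative helper, not cache_images.go itself):

package main

import "fmt"

func allImagesPresent(have, want []string) bool {
	set := make(map[string]bool, len(have))
	for _, img := range have {
		set[img] = true
	}
	for _, img := range want {
		if !set[img] {
			return false // at least one required image is missing
		}
	}
	return true
}

func main() {
	have := []string{"registry.k8s.io/pause:3.8", "registry.k8s.io/etcd:3.5.4-0"}
	fmt.Println(allImagesPresent(have, []string{"registry.k8s.io/pause:3.8"})) // true
}
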
	I1109 10:30:52.931666   29322 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1109 10:30:52.996570   29322 command_runner.go:130] > systemd
	I1109 10:30:52.999107   29322 cni.go:95] Creating CNI manager for ""
	I1109 10:30:52.999123   29322 cni.go:156] 2 nodes found, recommending kindnet
	I1109 10:30:52.999142   29322 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1109 10:30:52.999158   29322 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-102528 NodeName:multinode-102528 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
	I1109 10:30:52.999268   29322 kubeadm.go:161] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "multinode-102528"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
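The kubeadm config above is rendered from per-version templates using the values in the kubeadm options struct logged earlier (advertise address, pod subnet, cgroup driver, and so on). A stub showing the shape of that substitution, with assumed field names rather than minikube's real template:

package main

import (
	"os"
	"text/template"
)

const stub = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.Port}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(stub))
	_ = t.Execute(os.Stdout, struct {
		NodeIP string
		Port   int
	}{NodeIP: "192.168.58.2", Port: 8443})
}
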
	I1109 10:30:52.999350   29322 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-102528 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.3 ClusterName:multinode-102528 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1109 10:30:52.999422   29322 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
	I1109 10:30:53.006296   29322 command_runner.go:130] > kubeadm
	I1109 10:30:53.006305   29322 command_runner.go:130] > kubectl
	I1109 10:30:53.006308   29322 command_runner.go:130] > kubelet
	I1109 10:30:53.007208   29322 binaries.go:44] Found k8s binaries, skipping transfer
	I1109 10:30:53.007270   29322 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1109 10:30:53.014293   29322 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (478 bytes)
	I1109 10:30:53.026823   29322 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1109 10:30:53.039878   29322 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2038 bytes)
	I1109 10:30:53.052761   29322 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1109 10:30:53.056565   29322 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 10:30:53.065978   29322 certs.go:54] Setting up /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/multinode-102528 for IP: 192.168.58.2
	I1109 10:30:53.066104   29322 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.key
	I1109 10:30:53.066172   29322 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15331-22028/.minikube/proxy-client-ca.key
	I1109 10:30:53.066273   29322 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/multinode-102528/client.key
	I1109 10:30:53.066347   29322 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/multinode-102528/apiserver.key.cee25041
	I1109 10:30:53.066409   29322 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/multinode-102528/proxy-client.key
	I1109 10:30:53.066418   29322 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/multinode-102528/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1109 10:30:53.066454   29322 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/multinode-102528/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1109 10:30:53.066482   29322 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/multinode-102528/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1109 10:30:53.066503   29322 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/multinode-102528/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1109 10:30:53.066525   29322 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1109 10:30:53.066546   29322 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1109 10:30:53.066565   29322 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1109 10:30:53.066587   29322 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1109 10:30:53.066693   29322 certs.go:388] found cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/22868.pem (1338 bytes)
	W1109 10:30:53.066738   29322 certs.go:384] ignoring /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/22868_empty.pem, impossibly tiny 0 bytes
	I1109 10:30:53.066750   29322 certs.go:388] found cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca-key.pem (1675 bytes)
	I1109 10:30:53.066785   29322 certs.go:388] found cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca.pem (1082 bytes)
	I1109 10:30:53.066820   29322 certs.go:388] found cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/cert.pem (1123 bytes)
	I1109 10:30:53.066852   29322 certs.go:388] found cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/key.pem (1675 bytes)
	I1109 10:30:53.066929   29322 certs.go:388] found cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15331-22028/.minikube/files/etc/ssl/certs/228682.pem (1708 bytes)
	I1109 10:30:53.066959   29322 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1109 10:30:53.066985   29322 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/22868.pem -> /usr/share/ca-certificates/22868.pem
	I1109 10:30:53.067007   29322 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/files/etc/ssl/certs/228682.pem -> /usr/share/ca-certificates/228682.pem
	I1109 10:30:53.067498   29322 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/multinode-102528/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1109 10:30:53.084738   29322 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/multinode-102528/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1109 10:30:53.101565   29322 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/multinode-102528/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1109 10:30:53.118587   29322 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/multinode-102528/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1109 10:30:53.135860   29322 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 10:30:53.152588   29322 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1109 10:30:53.169226   29322 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 10:30:53.185584   29322 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1109 10:30:53.202725   29322 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 10:30:53.219684   29322 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/22868.pem --> /usr/share/ca-certificates/22868.pem (1338 bytes)
	I1109 10:30:53.237422   29322 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/files/etc/ssl/certs/228682.pem --> /usr/share/ca-certificates/228682.pem (1708 bytes)
	I1109 10:30:53.253645   29322 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1109 10:30:53.265820   29322 ssh_runner.go:195] Run: openssl version
	I1109 10:30:53.270891   29322 command_runner.go:130] > OpenSSL 1.1.1f  31 Mar 2020
	I1109 10:30:53.271122   29322 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/228682.pem && ln -fs /usr/share/ca-certificates/228682.pem /etc/ssl/certs/228682.pem"
	I1109 10:30:53.279178   29322 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/228682.pem
	I1109 10:30:53.283223   29322 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Nov  9 18:08 /usr/share/ca-certificates/228682.pem
	I1109 10:30:53.283402   29322 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Nov  9 18:08 /usr/share/ca-certificates/228682.pem
	I1109 10:30:53.283450   29322 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/228682.pem
	I1109 10:30:53.288370   29322 command_runner.go:130] > 3ec20f2e
	I1109 10:30:53.288756   29322 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/228682.pem /etc/ssl/certs/3ec20f2e.0"
	I1109 10:30:53.295935   29322 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 10:30:53.303922   29322 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 10:30:53.307692   29322 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Nov  9 18:04 /usr/share/ca-certificates/minikubeCA.pem
	I1109 10:30:53.307798   29322 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Nov  9 18:04 /usr/share/ca-certificates/minikubeCA.pem
	I1109 10:30:53.307852   29322 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 10:30:53.312739   29322 command_runner.go:130] > b5213941
	I1109 10:30:53.313074   29322 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1109 10:30:53.320042   29322 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22868.pem && ln -fs /usr/share/ca-certificates/22868.pem /etc/ssl/certs/22868.pem"
	I1109 10:30:53.327852   29322 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22868.pem
	I1109 10:30:53.331477   29322 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Nov  9 18:08 /usr/share/ca-certificates/22868.pem
	I1109 10:30:53.331623   29322 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Nov  9 18:08 /usr/share/ca-certificates/22868.pem
	I1109 10:30:53.331673   29322 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22868.pem
	I1109 10:30:53.336701   29322 command_runner.go:130] > 51391683
	I1109 10:30:53.337061   29322 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22868.pem /etc/ssl/certs/51391683.0"
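Each openssl x509 -hash call above computes the subject hash that OpenSSL uses to resolve CAs in /etc/ssl/certs, so each certificate gets a symlink named after the hash with a .0 suffix. A compact equivalent (paths and the linkCert helper are illustrative; the symlink step needs root):

package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkCert(pem string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. 51391683
	return os.Symlink(pem, filepath.Join("/etc/ssl/certs", hash+".0"))
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		log.Fatal(err)
	}
}
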
	I1109 10:30:53.344474   29322 kubeadm.go:396] StartCluster: {Name:multinode-102528 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:multinode-102528 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1109 10:30:53.344606   29322 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1109 10:30:53.366508   29322 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1109 10:30:53.373544   29322 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1109 10:30:53.373554   29322 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1109 10:30:53.373558   29322 command_runner.go:130] > /var/lib/minikube/etcd:
	I1109 10:30:53.373562   29322 command_runner.go:130] > member
	I1109 10:30:53.374358   29322 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I1109 10:30:53.374369   29322 kubeadm.go:627] restartCluster start
	I1109 10:30:53.374423   29322 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1109 10:30:53.381136   29322 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1109 10:30:53.381225   29322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-102528
	I1109 10:30:53.437830   29322 kubeconfig.go:135] verify returned: extract IP: "multinode-102528" does not appear in /Users/jenkins/minikube-integration/15331-22028/kubeconfig
	I1109 10:30:53.437916   29322 kubeconfig.go:146] "multinode-102528" context is missing from /Users/jenkins/minikube-integration/15331-22028/kubeconfig - will repair!
	I1109 10:30:53.438155   29322 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15331-22028/kubeconfig: {Name:mk02bb1c68cad934afd737965b2dbda8f5a4ba2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 10:30:53.438588   29322 loader.go:374] Config loaded from file:  /Users/jenkins/minikube-integration/15331-22028/kubeconfig
	I1109 10:30:53.438795   29322 kapi.go:59] client config for multinode-102528: &rest.Config{Host:"https://127.0.0.1:62610", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/multinode-102528/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/multinode-102528/client.key", CAFile:"/Users/jenkins/minikube-integration/15331-22028/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x23463c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1109 10:30:53.439169   29322 cert_rotation.go:137] Starting client certificate rotation controller
	I1109 10:30:53.439356   29322 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1109 10:30:53.447263   29322 api_server.go:165] Checking apiserver status ...
	I1109 10:30:53.447332   29322 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1109 10:30:53.455459   29322 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1109 10:30:53.657595   29322 api_server.go:165] Checking apiserver status ...
	I1109 10:30:53.657758   29322 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1109 10:30:53.668765   29322 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1109 10:30:53.857565   29322 api_server.go:165] Checking apiserver status ...
	I1109 10:30:53.857774   29322 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1109 10:30:53.869120   29322 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1109 10:30:54.057149   29322 api_server.go:165] Checking apiserver status ...
	I1109 10:30:54.057276   29322 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1109 10:30:54.068168   29322 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1109 10:30:54.257584   29322 api_server.go:165] Checking apiserver status ...
	I1109 10:30:54.257771   29322 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1109 10:30:54.268450   29322 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1109 10:30:54.457613   29322 api_server.go:165] Checking apiserver status ...
	I1109 10:30:54.457773   29322 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1109 10:30:54.469267   29322 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1109 10:30:54.657565   29322 api_server.go:165] Checking apiserver status ...
	I1109 10:30:54.657723   29322 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1109 10:30:54.668532   29322 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1109 10:30:54.857536   29322 api_server.go:165] Checking apiserver status ...
	I1109 10:30:54.857689   29322 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1109 10:30:54.868882   29322 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1109 10:30:55.057557   29322 api_server.go:165] Checking apiserver status ...
	I1109 10:30:55.057740   29322 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1109 10:30:55.068709   29322 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1109 10:30:55.255816   29322 api_server.go:165] Checking apiserver status ...
	I1109 10:30:55.255953   29322 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1109 10:30:55.266592   29322 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1109 10:30:55.457523   29322 api_server.go:165] Checking apiserver status ...
	I1109 10:30:55.457729   29322 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1109 10:30:55.468103   29322 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1109 10:30:55.657517   29322 api_server.go:165] Checking apiserver status ...
	I1109 10:30:55.657728   29322 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1109 10:30:55.668440   29322 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1109 10:30:55.857500   29322 api_server.go:165] Checking apiserver status ...
	I1109 10:30:55.857659   29322 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1109 10:30:55.869310   29322 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1109 10:30:56.057496   29322 api_server.go:165] Checking apiserver status ...
	I1109 10:30:56.057690   29322 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1109 10:30:56.068480   29322 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1109 10:30:56.257530   29322 api_server.go:165] Checking apiserver status ...
	I1109 10:30:56.257710   29322 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1109 10:30:56.268284   29322 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1109 10:30:56.457521   29322 api_server.go:165] Checking apiserver status ...
	I1109 10:30:56.457707   29322 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1109 10:30:56.468336   29322 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1109 10:30:56.468346   29322 api_server.go:165] Checking apiserver status ...
	I1109 10:30:56.468400   29322 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1109 10:30:56.476845   29322 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1109 10:30:56.476863   29322 kubeadm.go:602] needs reconfigure: apiserver error: timed out waiting for the condition
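The repeated pgrep probes above run on a roughly 200ms cadence until a deadline, at which point minikube concludes the apiserver is gone and reconfigures. The pattern, sketched with assumed interval and timeout values (pollAPIServerPID is a hypothetical helper):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func pollAPIServerPID(interval, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			return strings.TrimSpace(string(out)), nil // pgrep found a pid
		}
		if time.Now().After(deadline) {
			return "", fmt.Errorf("timed out waiting for kube-apiserver")
		}
		time.Sleep(interval)
	}
}

func main() {
	fmt.Println(pollAPIServerPID(200*time.Millisecond, 3*time.Second))
}
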
	I1109 10:30:56.476880   29322 kubeadm.go:1114] stopping kube-system containers ...
	I1109 10:30:56.476962   29322 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1109 10:30:56.499057   29322 command_runner.go:130] > 87217a284b95
	I1109 10:30:56.499068   29322 command_runner.go:130] > f24399907a45
	I1109 10:30:56.499076   29322 command_runner.go:130] > acd607123986
	I1109 10:30:56.499080   29322 command_runner.go:130] > 246636dd97e8
	I1109 10:30:56.499084   29322 command_runner.go:130] > 744e86ae21f6
	I1109 10:30:56.499089   29322 command_runner.go:130] > a72eb1f58fc3
	I1109 10:30:56.499092   29322 command_runner.go:130] > 1e9e9464a654
	I1109 10:30:56.499097   29322 command_runner.go:130] > 706558a4ed10
	I1109 10:30:56.499100   29322 command_runner.go:130] > 28b3a05115ad
	I1109 10:30:56.499104   29322 command_runner.go:130] > 78e4ea2c8ae0
	I1109 10:30:56.499108   29322 command_runner.go:130] > 652c7e303fdd
	I1109 10:30:56.499111   29322 command_runner.go:130] > 4e785d9e3405
	I1109 10:30:56.499116   29322 command_runner.go:130] > b1b331d84fd3
	I1109 10:30:56.499119   29322 command_runner.go:130] > 8b8ad03da153
	I1109 10:30:56.499122   29322 command_runner.go:130] > f969ced4e9d4
	I1109 10:30:56.499126   29322 command_runner.go:130] > efc1daab7958
	I1109 10:30:56.499130   29322 command_runner.go:130] > a0c4641044c8
	I1109 10:30:56.499133   29322 command_runner.go:130] > 7272fd486970
	I1109 10:30:56.499136   29322 command_runner.go:130] > 08723ade2218
	I1109 10:30:56.499141   29322 command_runner.go:130] > 23a2523fd3db
	I1109 10:30:56.499144   29322 command_runner.go:130] > 52deb537c4a0
	I1109 10:30:56.499155   29322 command_runner.go:130] > bac09f656d79
	I1109 10:30:56.499159   29322 command_runner.go:130] > 23053176a325
	I1109 10:30:56.499162   29322 command_runner.go:130] > ae39c6ec78b2
	I1109 10:30:56.499165   29322 command_runner.go:130] > 451b1fa8d38e
	I1109 10:30:56.499169   29322 command_runner.go:130] > 7ae33b58e2a6
	I1109 10:30:56.499172   29322 command_runner.go:130] > c1448cffd21f
	I1109 10:30:56.499176   29322 command_runner.go:130] > 7acd1c43832d
	I1109 10:30:56.499180   29322 command_runner.go:130] > 91faabc25d49
	I1109 10:30:56.499184   29322 command_runner.go:130] > 7d98acbd674e
	I1109 10:30:56.499187   29322 command_runner.go:130] > 5d9e6129376f
	I1109 10:30:56.499191   29322 command_runner.go:130] > 9a033e5f8d9b
	I1109 10:30:56.501398   29322 docker.go:444] Stopping containers: [87217a284b95 f24399907a45 acd607123986 246636dd97e8 744e86ae21f6 a72eb1f58fc3 1e9e9464a654 706558a4ed10 28b3a05115ad 78e4ea2c8ae0 652c7e303fdd 4e785d9e3405 b1b331d84fd3 8b8ad03da153 f969ced4e9d4 efc1daab7958 a0c4641044c8 7272fd486970 08723ade2218 23a2523fd3db 52deb537c4a0 bac09f656d79 23053176a325 ae39c6ec78b2 451b1fa8d38e 7ae33b58e2a6 c1448cffd21f 7acd1c43832d 91faabc25d49 7d98acbd674e 5d9e6129376f 9a033e5f8d9b]
	I1109 10:30:56.501499   29322 ssh_runner.go:195] Run: docker stop 87217a284b95 f24399907a45 acd607123986 246636dd97e8 744e86ae21f6 a72eb1f58fc3 1e9e9464a654 706558a4ed10 28b3a05115ad 78e4ea2c8ae0 652c7e303fdd 4e785d9e3405 b1b331d84fd3 8b8ad03da153 f969ced4e9d4 efc1daab7958 a0c4641044c8 7272fd486970 08723ade2218 23a2523fd3db 52deb537c4a0 bac09f656d79 23053176a325 ae39c6ec78b2 451b1fa8d38e 7ae33b58e2a6 c1448cffd21f 7acd1c43832d 91faabc25d49 7d98acbd674e 5d9e6129376f 9a033e5f8d9b
	I1109 10:30:56.526736   29322 command_runner.go:130] > 87217a284b95
	I1109 10:30:56.526853   29322 command_runner.go:130] > f24399907a45
	I1109 10:30:56.526861   29322 command_runner.go:130] > acd607123986
	I1109 10:30:56.526865   29322 command_runner.go:130] > 246636dd97e8
	I1109 10:30:56.526875   29322 command_runner.go:130] > 744e86ae21f6
	I1109 10:30:56.526879   29322 command_runner.go:130] > a72eb1f58fc3
	I1109 10:30:56.526884   29322 command_runner.go:130] > 1e9e9464a654
	I1109 10:30:56.527234   29322 command_runner.go:130] > 706558a4ed10
	I1109 10:30:56.527240   29322 command_runner.go:130] > 28b3a05115ad
	I1109 10:30:56.527249   29322 command_runner.go:130] > 78e4ea2c8ae0
	I1109 10:30:56.527252   29322 command_runner.go:130] > 652c7e303fdd
	I1109 10:30:56.527255   29322 command_runner.go:130] > 4e785d9e3405
	I1109 10:30:56.527259   29322 command_runner.go:130] > b1b331d84fd3
	I1109 10:30:56.527646   29322 command_runner.go:130] > 8b8ad03da153
	I1109 10:30:56.527654   29322 command_runner.go:130] > f969ced4e9d4
	I1109 10:30:56.527660   29322 command_runner.go:130] > efc1daab7958
	I1109 10:30:56.527688   29322 command_runner.go:130] > a0c4641044c8
	I1109 10:30:56.527696   29322 command_runner.go:130] > 7272fd486970
	I1109 10:30:56.527700   29322 command_runner.go:130] > 08723ade2218
	I1109 10:30:56.527711   29322 command_runner.go:130] > 23a2523fd3db
	I1109 10:30:56.527718   29322 command_runner.go:130] > 52deb537c4a0
	I1109 10:30:56.527722   29322 command_runner.go:130] > bac09f656d79
	I1109 10:30:56.527731   29322 command_runner.go:130] > 23053176a325
	I1109 10:30:56.527735   29322 command_runner.go:130] > ae39c6ec78b2
	I1109 10:30:56.527738   29322 command_runner.go:130] > 451b1fa8d38e
	I1109 10:30:56.527742   29322 command_runner.go:130] > 7ae33b58e2a6
	I1109 10:30:56.527745   29322 command_runner.go:130] > c1448cffd21f
	I1109 10:30:56.527749   29322 command_runner.go:130] > 7acd1c43832d
	I1109 10:30:56.527752   29322 command_runner.go:130] > 91faabc25d49
	I1109 10:30:56.527756   29322 command_runner.go:130] > 7d98acbd674e
	I1109 10:30:56.527759   29322 command_runner.go:130] > 5d9e6129376f
	I1109 10:30:56.527763   29322 command_runner.go:130] > 9a033e5f8d9b
	I1109 10:30:56.530164   29322 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1109 10:30:56.540298   29322 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1109 10:30:56.548234   29322 command_runner.go:130] > -rw------- 1 root root 5639 Nov  9 18:25 /etc/kubernetes/admin.conf
	I1109 10:30:56.548245   29322 command_runner.go:130] > -rw------- 1 root root 5656 Nov  9 18:28 /etc/kubernetes/controller-manager.conf
	I1109 10:30:56.548251   29322 command_runner.go:130] > -rw------- 1 root root 2003 Nov  9 18:25 /etc/kubernetes/kubelet.conf
	I1109 10:30:56.548258   29322 command_runner.go:130] > -rw------- 1 root root 5600 Nov  9 18:28 /etc/kubernetes/scheduler.conf
	I1109 10:30:56.548268   29322 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Nov  9 18:25 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Nov  9 18:28 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2003 Nov  9 18:25 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Nov  9 18:28 /etc/kubernetes/scheduler.conf
	
	I1109 10:30:56.548324   29322 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1109 10:30:56.555474   29322 command_runner.go:130] >     server: https://control-plane.minikube.internal:8443
	I1109 10:30:56.556287   29322 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1109 10:30:56.563281   29322 command_runner.go:130] >     server: https://control-plane.minikube.internal:8443
	I1109 10:30:56.564170   29322 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1109 10:30:56.571047   29322 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1109 10:30:56.571107   29322 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1109 10:30:56.578579   29322 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1109 10:30:56.585908   29322 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1109 10:30:56.585969   29322 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1109 10:30:56.592750   29322 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1109 10:30:56.599818   29322 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1109 10:30:56.599828   29322 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1109 10:30:56.641004   29322 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1109 10:30:56.641099   29322 command_runner.go:130] > [certs] Using existing ca certificate authority
	I1109 10:30:56.641340   29322 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I1109 10:30:56.641600   29322 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1109 10:30:56.641790   29322 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I1109 10:30:56.642178   29322 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I1109 10:30:56.642469   29322 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I1109 10:30:56.642615   29322 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I1109 10:30:56.643001   29322 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I1109 10:30:56.643190   29322 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1109 10:30:56.643371   29322 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1109 10:30:56.643514   29322 command_runner.go:130] > [certs] Using the existing "sa" key
	I1109 10:30:56.646486   29322 command_runner.go:130] ! W1109 18:30:56.643462    1200 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1109 10:30:56.646503   29322 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1109 10:30:56.688102   29322 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1109 10:30:56.958325   29322 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
	I1109 10:30:57.071158   29322 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
	I1109 10:30:57.585147   29322 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1109 10:30:57.725789   29322 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1109 10:30:57.730695   29322 command_runner.go:130] ! W1109 18:30:56.690457    1210 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1109 10:30:57.730717   29322 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.084226348s)
	I1109 10:30:57.730735   29322 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1109 10:30:57.782863   29322 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1109 10:30:57.783496   29322 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1109 10:30:57.783652   29322 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I1109 10:30:57.857477   29322 command_runner.go:130] ! W1109 18:30:57.776830    1232 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1109 10:30:57.857501   29322 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1109 10:30:57.898838   29322 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1109 10:30:57.898862   29322 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1109 10:30:57.901424   29322 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1109 10:30:57.902007   29322 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1109 10:30:57.905954   29322 command_runner.go:130] ! W1109 18:30:57.902040    1266 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1109 10:30:57.905978   29322 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1109 10:30:57.963979   29322 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1109 10:30:57.969707   29322 command_runner.go:130] ! W1109 18:30:57.966647    1279 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1109 10:30:57.969737   29322 api_server.go:51] waiting for apiserver process to appear ...
	I1109 10:30:57.969848   29322 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 10:30:58.525853   29322 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 10:30:59.026493   29322 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 10:30:59.525289   29322 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 10:31:00.027290   29322 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 10:31:00.036231   29322 command_runner.go:130] > 1777
	I1109 10:31:00.037099   29322 api_server.go:71] duration metric: took 2.067416783s to wait for apiserver process to appear ...
	I1109 10:31:00.037110   29322 api_server.go:87] waiting for apiserver healthz status ...
	I1109 10:31:00.037123   29322 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:62610/healthz ...
	I1109 10:31:05.037242   29322 api_server.go:268] stopped: https://127.0.0.1:62610/healthz: Get "https://127.0.0.1:62610/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1109 10:31:05.537352   29322 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:62610/healthz ...
	I1109 10:31:07.877017   29322 api_server.go:278] https://127.0.0.1:62610/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1109 10:31:07.877032   29322 api_server.go:102] status: https://127.0.0.1:62610/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1109 10:31:08.037267   29322 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:62610/healthz ...
	I1109 10:31:08.044879   29322 api_server.go:278] https://127.0.0.1:62610/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1109 10:31:08.044899   29322 api_server.go:102] status: https://127.0.0.1:62610/healthz returned error 500:
	[body identical to the 500 response logged just above]
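
The 403 and 500 responses above are the apiserver's normal start-up sequence rather than the failure itself: it answers before the rbac/bootstrap-roles and scheduling poststarthooks complete, so the probe keeps retrying until /healthz returns 200. A minimal sketch of this polling pattern in Go, assuming a plain net/http client with the URL and ~500ms interval seen in the log (an illustration, not minikube's actual api_server.go implementation):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitForHealthz polls /healthz until it returns 200 or the deadline passes.
    // 403 means the anonymous probe is rejected until RBAC bootstrap finishes;
    // 500 means at least one poststarthook still reports [-] failed.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // The apiserver serves a self-signed certificate on 127.0.0.1, so
            // this unauthenticated probe skips verification.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
                fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver at %s never reported healthy", url)
    }

    func main() {
        if err := waitForHealthz("https://127.0.0.1:62610/healthz", 2*time.Minute); err != nil {
            fmt.Println(err)
        }
    }
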
	I1109 10:31:08.538623   29322 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:62610/healthz ...
	I1109 10:31:08.545590   29322 api_server.go:278] https://127.0.0.1:62610/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1109 10:31:08.560546   29322 api_server.go:102] status: https://127.0.0.1:62610/healthz returned error 500:
	[body identical to the 500 response logged just above]
	I1109 10:31:09.037234   29322 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:62610/healthz ...
	I1109 10:31:09.043931   29322 api_server.go:278] https://127.0.0.1:62610/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1109 10:31:09.043950   29322 api_server.go:102] status: https://127.0.0.1:62610/healthz returned error 500:
	[body identical to the 500 response logged just above]
	I1109 10:31:09.537526   29322 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:62610/healthz ...
	I1109 10:31:09.543733   29322 api_server.go:278] https://127.0.0.1:62610/healthz returned 200:
	ok
	I1109 10:31:09.543794   29322 round_trippers.go:463] GET https://127.0.0.1:62610/version
	I1109 10:31:09.543804   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:09.543814   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:09.543821   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:09.550517   29322 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1109 10:31:09.550529   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:09.550536   29322 round_trippers.go:580]     Audit-Id: 34c33ead-36cb-43db-afd0-3df0bf4099db
	I1109 10:31:09.550542   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:09.550546   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:09.550551   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:09.550556   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:09.550561   29322 round_trippers.go:580]     Content-Length: 263
	I1109 10:31:09.550565   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:09 GMT
	I1109 10:31:09.550584   29322 request.go:1154] Response Body: {
	  "major": "1",
	  "minor": "25",
	  "gitVersion": "v1.25.3",
	  "gitCommit": "434bfd82814af038ad94d62ebe59b133fcb50506",
	  "gitTreeState": "clean",
	  "buildDate": "2022-10-12T10:49:09Z",
	  "goVersion": "go1.19.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1109 10:31:09.550636   29322 api_server.go:140] control plane version: v1.25.3
	I1109 10:31:09.550644   29322 api_server.go:130] duration metric: took 9.513780893s to wait for apiserver health ...
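
The version probe is a plain GET of /version; the JSON body above is all minikube needs to report the control-plane version. A small sketch of decoding that payload, with struct fields mirroring the response shown in the log:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // versionInfo mirrors the fields of the /version body logged above.
    type versionInfo struct {
        Major      string `json:"major"`
        Minor      string `json:"minor"`
        GitVersion string `json:"gitVersion"`
        BuildDate  string `json:"buildDate"`
        GoVersion  string `json:"goVersion"`
        Platform   string `json:"platform"`
    }

    func main() {
        // Body copied (abridged) from the response above.
        body := []byte(`{"major":"1","minor":"25","gitVersion":"v1.25.3","buildDate":"2022-10-12T10:49:09Z","goVersion":"go1.19.2","platform":"linux/amd64"}`)
        var v versionInfo
        if err := json.Unmarshal(body, &v); err != nil {
            panic(err)
        }
        fmt.Println("control plane version:", v.GitVersion) // v1.25.3
    }
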
	I1109 10:31:09.550651   29322 cni.go:95] Creating CNI manager for ""
	I1109 10:31:09.550657   29322 cni.go:156] 2 nodes found, recommending kindnet
	I1109 10:31:09.589534   29322 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1109 10:31:09.626427   29322 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1109 10:31:09.634186   29322 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I1109 10:31:09.634204   29322 command_runner.go:130] >   Size: 2828728   	Blocks: 5528       IO Block: 4096   regular file
	I1109 10:31:09.634209   29322 command_runner.go:130] > Device: 8fh/143d	Inode: 2102734     Links: 1
	I1109 10:31:09.634247   29322 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I1109 10:31:09.634260   29322 command_runner.go:130] > Access: 2022-05-18 18:39:21.000000000 +0000
	I1109 10:31:09.634265   29322 command_runner.go:130] > Modify: 2022-05-18 18:39:21.000000000 +0000
	I1109 10:31:09.634269   29322 command_runner.go:130] > Change: 2022-11-09 18:03:43.031940595 +0000
	I1109 10:31:09.634272   29322 command_runner.go:130] >  Birth: -
	I1109 10:31:09.634321   29322 cni.go:189] applying CNI manifest using /var/lib/minikube/binaries/v1.25.3/kubectl ...
	I1109 10:31:09.634327   29322 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2429 bytes)
	I1109 10:31:09.651799   29322 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.25.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1109 10:31:10.341562   29322 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I1109 10:31:10.343883   29322 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I1109 10:31:10.345044   29322 command_runner.go:130] > serviceaccount/kindnet unchanged
	I1109 10:31:10.353898   29322 command_runner.go:130] > daemonset.apps/kindnet configured
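
The CNI step above stats /opt/cni/bin/portmap, copies the kindnet manifest to /var/tmp/minikube/cni.yaml, and applies it with the version-pinned kubectl. A sketch of that apply step as a local command, using the exact paths from the log (minikube actually routes this through its ssh_runner inside the node container rather than os/exec on the host):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Apply the CNI manifest with the pinned kubectl, mirroring the
        // ssh_runner invocation logged above.
        cmd := exec.Command("sudo",
            "/var/lib/minikube/binaries/v1.25.3/kubectl", "apply",
            "--kubeconfig=/var/lib/minikube/kubeconfig",
            "-f", "/var/tmp/minikube/cni.yaml")
        out, err := cmd.CombinedOutput()
        fmt.Print(string(out)) // e.g. "daemonset.apps/kindnet configured"
        if err != nil {
            fmt.Println("apply failed:", err)
        }
    }
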
	I1109 10:31:10.360090   29322 system_pods.go:43] waiting for kube-system pods to appear ...
	I1109 10:31:10.360161   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods
	I1109 10:31:10.360170   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:10.360177   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:10.360183   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:10.363951   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:10.363971   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:10.363986   29322 round_trippers.go:580]     Audit-Id: b33087c4-84f3-4d4c-ac3c-4b7b24f702c3
	I1109 10:31:10.363996   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:10.364005   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:10.364012   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:10.364018   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:10.364024   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:10 GMT
	I1109 10:31:10.365203   29322 request.go:1154] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"990"},"items":[{"metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"986","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 85432 chars]
	I1109 10:31:10.368224   29322 system_pods.go:59] 12 kube-system pods found
	I1109 10:31:10.368243   29322 system_pods.go:61] "coredns-565d847f94-fx6lt" [680c8c15-39e0-4143-8dfd-30727e628800] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 10:31:10.368248   29322 system_pods.go:61] "etcd-multinode-102528" [5dde8340-2916-4da6-91aa-ea6dfe24a5ad] Running
	I1109 10:31:10.368252   29322 system_pods.go:61] "kindnet-6kjz8" [b34e8f27-542c-40de-80a7-cf1226429128] Running
	I1109 10:31:10.368256   29322 system_pods.go:61] "kindnet-9td8m" [bb563027-b991-4b95-921a-ee4687934118] Running
	I1109 10:31:10.368259   29322 system_pods.go:61] "kindnet-z66sn" [03cc3962-c1e0-444a-8743-743e707bf96d] Running
	I1109 10:31:10.368264   29322 system_pods.go:61] "kube-apiserver-multinode-102528" [f48fa313-e8ec-42bc-87bc-7daede794fe2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1109 10:31:10.368270   29322 system_pods.go:61] "kube-controller-manager-multinode-102528" [3dd056ba-22b5-4b0c-aa7e-9e00d215df9a] Running
	I1109 10:31:10.368275   29322 system_pods.go:61] "kube-proxy-9wsxp" [03c6822b-9fef-4fa3-82a3-bb5082cf31b3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1109 10:31:10.368278   29322 system_pods.go:61] "kube-proxy-c4lh6" [e9055586-6022-464a-acdd-6fce3c87392b] Running
	I1109 10:31:10.368282   29322 system_pods.go:61] "kube-proxy-kh6r6" [de2bad4b-35b4-4537-a6a3-7acd77c63e69] Running
	I1109 10:31:10.368286   29322 system_pods.go:61] "kube-scheduler-multinode-102528" [26dff845-4103-4884-86e3-42c37dc577c0] Running
	I1109 10:31:10.368292   29322 system_pods.go:61] "storage-provisioner" [5c5e247e-06db-434c-af4a-91a2c2a08779] Running
	I1109 10:31:10.368296   29322 system_pods.go:74] duration metric: took 8.196308ms to wait for pod list to return data ...
	I1109 10:31:10.368303   29322 node_conditions.go:102] verifying NodePressure condition ...
	I1109 10:31:10.368337   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes
	I1109 10:31:10.368342   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:10.368349   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:10.368355   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:10.371241   29322 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 10:31:10.371252   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:10.371258   29322 round_trippers.go:580]     Audit-Id: 9d2bee65-a8b0-4c5c-9d33-2e0f112cfe85
	I1109 10:31:10.371274   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:10.371282   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:10.371287   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:10.371295   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:10.371300   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:10 GMT
	I1109 10:31:10.371379   29322 request.go:1154] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"990"},"items":[{"metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 10902 chars]
	I1109 10:31:10.371833   29322 node_conditions.go:122] node storage ephemeral capacity is 115273188Ki
	I1109 10:31:10.371844   29322 node_conditions.go:123] node cpu capacity is 6
	I1109 10:31:10.371855   29322 node_conditions.go:122] node storage ephemeral capacity is 115273188Ki
	I1109 10:31:10.371858   29322 node_conditions.go:123] node cpu capacity is 6
	I1109 10:31:10.371876   29322 node_conditions.go:105] duration metric: took 3.56331ms to run NodePressure ...
	I1109 10:31:10.371890   29322 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1109 10:31:10.478743   29322 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I1109 10:31:10.516073   29322 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I1109 10:31:10.519464   29322 command_runner.go:130] ! W1109 18:31:10.446328    2887 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1109 10:31:10.519485   29322 kubeadm.go:763] waiting for restarted kubelet to initialise ...
	I1109 10:31:10.519541   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I1109 10:31:10.519546   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:10.519552   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:10.519558   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:10.523051   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:10.523061   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:10.523067   29322 round_trippers.go:580]     Audit-Id: be75f9b2-da9d-4c2a-bb4d-8708055cafab
	I1109 10:31:10.523072   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:10.523080   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:10.523087   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:10.523092   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:10.523097   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:10 GMT
	I1109 10:31:10.523284   29322 request.go:1154] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"993"},"items":[{"metadata":{"name":"etcd-multinode-102528","namespace":"kube-system","uid":"5dde8340-2916-4da6-91aa-ea6dfe24a5ad","resourceVersion":"760","creationTimestamp":"2022-11-09T18:25:54Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"58165e0d3ee72e9b0f054fadec557161","kubernetes.io/config.mirror":"58165e0d3ee72e9b0f054fadec557161","kubernetes.io/config.seen":"2022-11-09T18:25:54.343403314Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kub [truncated 30634 chars]
	I1109 10:31:10.524047   29322 kubeadm.go:778] kubelet initialised
	I1109 10:31:10.524056   29322 kubeadm.go:779] duration metric: took 4.560079ms waiting for restarted kubelet to initialise ...
	I1109 10:31:10.524062   29322 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
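
The GET pairs that follow (each pod, then its node) are the readiness poll behind pod_ready.go: fetch the pod, check its Ready condition, and re-query every ~500ms until it flips or the 4m0s budget expires. A client-go sketch of the same check (the kubeconfig path is a placeholder; this is an illustration, not the pod_ready.go implementation):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's Ready condition is True.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        // Placeholder path; minikube builds its client from the profile's kubeconfig.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            pod, err := clientset.CoreV1().Pods("kube-system").
                Get(context.TODO(), "coredns-565d847f94-fx6lt", metav1.GetOptions{})
            if err == nil && isPodReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for pod to become Ready")
    }
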
	I1109 10:31:10.524096   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods
	I1109 10:31:10.524101   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:10.524108   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:10.524114   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:10.528593   29322 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1109 10:31:10.528605   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:10.528611   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:10 GMT
	I1109 10:31:10.528615   29322 round_trippers.go:580]     Audit-Id: 1ddadfb1-bb80-443f-86a9-c09f461ccebb
	I1109 10:31:10.528620   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:10.528627   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:10.528631   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:10.528637   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:10.530236   29322 request.go:1154] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"993"},"items":[{"metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"986","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 85432 chars]
	I1109 10:31:10.532148   29322 pod_ready.go:78] waiting up to 4m0s for pod "coredns-565d847f94-fx6lt" in "kube-system" namespace to be "Ready" ...
	I1109 10:31:10.532191   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:10.532197   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:10.532203   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:10.532209   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:10.534114   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:10.534124   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:10.534132   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:10.534139   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:10.534144   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:10.534149   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:10 GMT
	I1109 10:31:10.534153   29322 round_trippers.go:580]     Audit-Id: 0ef76549-eef7-46b4-9eae-80919bc16550
	I1109 10:31:10.534165   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:10.534429   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"986","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6604 chars]
	I1109 10:31:10.534715   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:10.534722   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:10.534728   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:10.534733   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:10.536900   29322 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 10:31:10.536909   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:10.536914   29322 round_trippers.go:580]     Audit-Id: 56354366-fc07-4398-9895-9d000dba0270
	I1109 10:31:10.536922   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:10.536932   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:10.536937   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:10.536942   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:10.536947   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:10 GMT
	I1109 10:31:10.536997   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:11.038168   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:11.038189   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:11.038201   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:11.038211   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:11.041651   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:11.041673   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:11.041683   29322 round_trippers.go:580]     Audit-Id: d0c00871-52c3-4e1d-af98-0213bfdebca8
	I1109 10:31:11.041713   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:11.041729   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:11.041739   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:11.041749   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:11.041768   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:11 GMT
	I1109 10:31:11.042016   29322 request.go:1154] Response Body: [unchanged; identical to the Pod "coredns-565d847f94-fx6lt" body at 10:31:10.534429 above]
	I1109 10:31:11.042303   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:11.042309   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:11.042315   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:11.042321   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:11.044351   29322 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 10:31:11.044360   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:11.044366   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:11.044371   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:11 GMT
	I1109 10:31:11.044376   29322 round_trippers.go:580]     Audit-Id: 15db7ac2-66ab-4151-9b88-b8154b6e6005
	I1109 10:31:11.044381   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:11.044385   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:11.044390   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:11.044436   29322 request.go:1154] Response Body: [unchanged; identical to the Node "multinode-102528" body at 10:31:10.536997 above]
	I1109 10:31:11.539063   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:11.539084   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:11.539096   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:11.539106   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:11.542945   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:11.542960   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:11.542967   29322 round_trippers.go:580]     Audit-Id: 89c67b66-1bba-40ff-bc44-ccbad45cde76
	I1109 10:31:11.542974   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:11.542981   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:11.542987   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:11.542997   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:11.543003   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:11 GMT
	I1109 10:31:11.543103   29322 request.go:1154] Response Body: [unchanged; identical to the Pod "coredns-565d847f94-fx6lt" body at 10:31:10.534429 above]
	I1109 10:31:11.543398   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:11.543404   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:11.543412   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:11.543418   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:11.545265   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:11.545274   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:11.545280   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:11.545285   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:11.545293   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:11.545297   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:11.545303   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:11 GMT
	I1109 10:31:11.545310   29322 round_trippers.go:580]     Audit-Id: 6ec58151-372f-4f6b-85f3-ed59100c8fe0
	I1109 10:31:11.545685   29322 request.go:1154] Response Body: [unchanged; identical to the Node "multinode-102528" body at 10:31:10.536997 above]
	I1109 10:31:12.037449   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:12.037475   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:12.037490   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:12.037502   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:12.040764   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:12.040774   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:12.040780   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:12.040784   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:12.040791   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:12 GMT
	I1109 10:31:12.040797   29322 round_trippers.go:580]     Audit-Id: f55229c7-5505-41f3-adbb-59f88235ba56
	I1109 10:31:12.040802   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:12.040833   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:12.040998   29322 request.go:1154] Response Body: [unchanged; identical to the Pod "coredns-565d847f94-fx6lt" body at 10:31:10.534429 above]
	I1109 10:31:12.041284   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:12.041290   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:12.041296   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:12.041302   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:12.043117   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:12.043126   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:12.043132   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:12.043137   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:12.043142   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:12 GMT
	I1109 10:31:12.043163   29322 round_trippers.go:580]     Audit-Id: 506e55cb-4e1a-4ef5-a772-acfa8e24556e
	I1109 10:31:12.043176   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:12.043183   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:12.043235   29322 request.go:1154] Response Body: [unchanged; identical to the Node "multinode-102528" body at 10:31:10.536997 above]
	I1109 10:31:12.537958   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:12.537981   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:12.537994   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:12.538004   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:12.541582   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:12.541597   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:12.541604   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:12.541610   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:12.541617   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:12 GMT
	I1109 10:31:12.541624   29322 round_trippers.go:580]     Audit-Id: d6be5be2-15d6-4b0a-895a-fd6796e8ab86
	I1109 10:31:12.541631   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:12.541637   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:12.541854   29322 request.go:1154] Response Body: [unchanged; identical to the Pod "coredns-565d847f94-fx6lt" body at 10:31:10.534429 above]
	I1109 10:31:12.542232   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:12.542239   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:12.542245   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:12.542250   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:12.544130   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:12.544143   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:12.544149   29322 round_trippers.go:580]     Audit-Id: d478f378-638b-411e-a2d0-bb9fc87f2236
	I1109 10:31:12.544154   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:12.544159   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:12.544166   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:12.544171   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:12.544176   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:12 GMT
	I1109 10:31:12.544221   29322 request.go:1154] Response Body: [unchanged; identical to the Node "multinode-102528" body at 10:31:10.536997 above]
	I1109 10:31:12.544410   29322 pod_ready.go:102] pod "coredns-565d847f94-fx6lt" in "kube-system" namespace has status "Ready":"False"
	I1109 10:31:13.037700   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:13.037719   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:13.037732   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:13.037742   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:13.041492   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:13.041507   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:13.041515   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:13.041521   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:13.041528   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:13.041534   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:13.041540   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:13 GMT
	I1109 10:31:13.041548   29322 round_trippers.go:580]     Audit-Id: fd31528d-08a4-4c10-ad69-2b587887eefa
	I1109 10:31:13.041640   29322 request.go:1154] Response Body: [unchanged; identical to the Pod "coredns-565d847f94-fx6lt" body at 10:31:10.534429 above]
	I1109 10:31:13.041980   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:13.041987   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:13.041994   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:13.041999   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:13.044189   29322 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 10:31:13.044198   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:13.044204   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:13.044209   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:13.044214   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:13.044219   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:13.044223   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:13 GMT
	I1109 10:31:13.044228   29322 round_trippers.go:580]     Audit-Id: f5f133b0-b27f-4d4b-bc06-f336e87d6e47
	I1109 10:31:13.044280   29322 request.go:1154] Response Body: [unchanged; identical to the Node "multinode-102528" body at 10:31:10.536997 above]
	I1109 10:31:13.537367   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:13.558119   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:13.558136   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:13.558150   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:13.561927   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:13.561941   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:13.561955   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:13 GMT
	I1109 10:31:13.561961   29322 round_trippers.go:580]     Audit-Id: ddfda5c6-9e1b-49db-aa32-fbae08a710f8
	I1109 10:31:13.561967   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:13.561974   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:13.561979   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:13.561984   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:13.562270   29322 request.go:1154] Response Body: [unchanged; identical to the Pod "coredns-565d847f94-fx6lt" body at 10:31:10.534429 above]
	I1109 10:31:13.562560   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:13.562566   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:13.562572   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:13.562578   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:13.564444   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:13.564454   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:13.564459   29322 round_trippers.go:580]     Audit-Id: 9a04d481-3ffa-4e14-9794-7a6c9d40908b
	I1109 10:31:13.564464   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:13.564469   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:13.564474   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:13.564479   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:13.564487   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:13 GMT
	I1109 10:31:13.564814   29322 request.go:1154] Response Body: [unchanged; identical to the Node "multinode-102528" body at 10:31:10.536997 above]
	I1109 10:31:14.039405   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:14.039428   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:14.039442   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:14.039453   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:14.043121   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:14.043144   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:14.043154   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:14.043160   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:14.043167   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:14.043174   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:14 GMT
	I1109 10:31:14.043180   29322 round_trippers.go:580]     Audit-Id: 56795e2f-8c6e-4057-9d2e-a4779f85a832
	I1109 10:31:14.043187   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:14.043263   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"986","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6604 chars]
	I1109 10:31:14.043637   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:14.043644   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:14.043650   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:14.043656   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:14.045641   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:14.045651   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:14.045656   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:14 GMT
	I1109 10:31:14.045661   29322 round_trippers.go:580]     Audit-Id: 64e6f601-7b65-4744-86e3-fe0eb676868c
	I1109 10:31:14.045666   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:14.045671   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:14.045678   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:14.045685   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:14.045823   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:14.539341   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:14.539361   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:14.539374   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:14.539383   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:14.543132   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:14.543147   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:14.543155   29322 round_trippers.go:580]     Audit-Id: 20a34b51-ac18-43a8-8435-4d32fc87bb5f
	I1109 10:31:14.543161   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:14.543169   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:14.543175   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:14.543188   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:14.543196   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:14 GMT
	I1109 10:31:14.543283   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:14.543664   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:14.543671   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:14.543677   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:14.543682   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:14.545442   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:14.545452   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:14.545458   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:14.545463   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:14.545468   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:14 GMT
	I1109 10:31:14.545473   29322 round_trippers.go:580]     Audit-Id: c874ff18-df68-4a94-86c9-9e3af3d78370
	I1109 10:31:14.545478   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:14.545482   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:14.545534   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:14.545719   29322 pod_ready.go:102] pod "coredns-565d847f94-fx6lt" in "kube-system" namespace has status "Ready":"False"
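The cycle visible above — GET the pod, GET its node, check the Ready condition, wait roughly 500ms — is the readiness poll that minikube's pod_ready.go drives while waiting for coredns. The following is a minimal sketch only, not minikube's actual implementation: the 4-minute timeout is an assumption, the pod and namespace names are taken from this log, and the kubeconfig path is the client-go default rather than the profile-specific one minikube uses. A comparable poll can be written with client-go like this:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Load a kubeconfig (default ~/.kube/config here; minikube points at
    	// the profile's own config) and build a typed clientset.
    	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(config)
    	if err != nil {
    		panic(err)
    	}

    	// Poll every 500ms, matching the cadence of the timestamps above;
    	// the 4-minute timeout is illustrative.
    	err = wait.PollImmediate(500*time.Millisecond, 4*time.Minute, func() (bool, error) {
    		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(),
    			"coredns-565d847f94-fx6lt", metav1.GetOptions{})
    		if err != nil {
    			return false, err // stop polling on API errors
    		}
    		for _, cond := range pod.Status.Conditions {
    			if cond.Type == corev1.PodReady {
    				return cond.Status == corev1.ConditionTrue, nil
    			}
    		}
    		return false, nil // no Ready condition yet; keep polling
    	})
    	if err != nil {
    		fmt.Println("pod never became Ready:", err)
    		return
    	}
    	fmt.Println("pod is Ready")
    }

wait.PollImmediate returns as soon as the condition function reports true or an error, which matches how the log alternates GET pairs with the periodic "Ready":"False" status lines until the pod flips to Ready or the wait times out.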
	I1109 10:31:15.038493   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:15.038513   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:15.038531   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:15.038585   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:15.042183   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:15.042196   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:15.042203   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:15 GMT
	I1109 10:31:15.042210   29322 round_trippers.go:580]     Audit-Id: 7295c00e-9d84-46e5-87cf-1c4b94168b7c
	I1109 10:31:15.042216   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:15.042223   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:15.042230   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:15.042236   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:15.042343   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:15.042743   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:15.042751   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:15.042757   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:15.042762   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:15.047950   29322 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1109 10:31:15.047961   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:15.047968   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:15.047973   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:15.047979   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:15.047983   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:15.047988   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:15 GMT
	I1109 10:31:15.047993   29322 round_trippers.go:580]     Audit-Id: bc0730a0-0f15-472d-a7bf-bb00cba9df66
	I1109 10:31:15.048057   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:15.539343   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:15.539365   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:15.539385   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:15.539429   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:15.543198   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:15.543213   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:15.543221   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:15.543227   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:15.543245   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:15.543252   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:15.543258   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:15 GMT
	I1109 10:31:15.543264   29322 round_trippers.go:580]     Audit-Id: 58fe1134-cc6c-451b-b309-1901404af2da
	I1109 10:31:15.543657   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:15.544517   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:15.544528   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:15.544538   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:15.544546   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:15.546842   29322 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 10:31:15.546852   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:15.546858   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:15.546863   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:15 GMT
	I1109 10:31:15.546868   29322 round_trippers.go:580]     Audit-Id: 79bdfd40-aee9-4094-b929-9cf84efb694f
	I1109 10:31:15.546873   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:15.546877   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:15.546883   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:15.547071   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:16.039322   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:16.039345   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:16.039357   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:16.039368   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:16.043120   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:16.043137   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:16.043145   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:16 GMT
	I1109 10:31:16.043152   29322 round_trippers.go:580]     Audit-Id: 7d02b36d-3780-43e4-879a-917456fe14b9
	I1109 10:31:16.043161   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:16.043167   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:16.043173   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:16.043180   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:16.043269   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:16.043696   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:16.043703   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:16.043710   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:16.043715   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:16.045563   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:16.045573   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:16.045579   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:16.045585   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:16.045591   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:16.045596   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:16.045600   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:16 GMT
	I1109 10:31:16.045605   29322 round_trippers.go:580]     Audit-Id: 67719710-afc1-46e7-ac0c-2b0223786666
	I1109 10:31:16.045801   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:16.537395   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:16.537418   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:16.537431   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:16.537441   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:16.541163   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:16.541186   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:16.541198   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:16.541208   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:16.541217   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:16.541223   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:16 GMT
	I1109 10:31:16.541229   29322 round_trippers.go:580]     Audit-Id: e8f4d533-8ba4-4194-bde5-3e0766997228
	I1109 10:31:16.541237   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:16.541345   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:16.541662   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:16.541670   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:16.541676   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:16.541688   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:16.543532   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:16.543546   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:16.543574   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:16.543582   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:16.543587   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:16.543592   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:16 GMT
	I1109 10:31:16.543600   29322 round_trippers.go:580]     Audit-Id: 74678dc9-0b76-488a-b4bf-6b60b3991d71
	I1109 10:31:16.543606   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:16.543808   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:17.037367   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:17.037393   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:17.037405   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:17.037419   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:17.041448   29322 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1109 10:31:17.041460   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:17.041465   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:17.041473   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:17.041478   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:17.041483   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:17.041487   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:17 GMT
	I1109 10:31:17.041493   29322 round_trippers.go:580]     Audit-Id: 89ff4584-216d-47c9-b463-b8ca8e440134
	I1109 10:31:17.041547   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:17.041833   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:17.041839   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:17.041846   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:17.041851   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:17.043870   29322 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 10:31:17.043880   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:17.043886   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:17.043891   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:17.043896   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:17 GMT
	I1109 10:31:17.043901   29322 round_trippers.go:580]     Audit-Id: 9b7d2f6a-c665-4e94-b99a-f26d28212e61
	I1109 10:31:17.043906   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:17.043911   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:17.043956   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:17.044133   29322 pod_ready.go:102] pod "coredns-565d847f94-fx6lt" in "kube-system" namespace has status "Ready":"False"
	I1109 10:31:17.537355   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:17.537379   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:17.537392   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:17.537403   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:17.540795   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:17.540811   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:17.540820   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:17.540826   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:17.540833   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:17.540840   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:17 GMT
	I1109 10:31:17.540846   29322 round_trippers.go:580]     Audit-Id: 19770046-1610-4c36-9ed3-0639b14fa8ef
	I1109 10:31:17.540852   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:17.540949   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:17.541277   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:17.541283   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:17.541289   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:17.541294   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:17.543125   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:17.543136   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:17.543141   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:17 GMT
	I1109 10:31:17.543147   29322 round_trippers.go:580]     Audit-Id: 6dfd279f-2895-423b-a34d-1ea7aa6d494a
	I1109 10:31:17.543152   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:17.543157   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:17.543162   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:17.543166   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:17.543215   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:18.037606   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:18.037630   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:18.037643   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:18.037653   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:18.040964   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:18.040980   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:18.040987   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:18.040994   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:18.041000   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:18.041006   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:18.041016   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:18 GMT
	I1109 10:31:18.041023   29322 round_trippers.go:580]     Audit-Id: 5a254ee7-624e-4812-bef7-b25d95372942
	I1109 10:31:18.041408   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:18.041761   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:18.041769   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:18.041777   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:18.041782   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:18.043528   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:18.043538   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:18.043544   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:18.043552   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:18.043557   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:18 GMT
	I1109 10:31:18.043562   29322 round_trippers.go:580]     Audit-Id: c09d6bf8-3033-4b51-87eb-a4cea66ca9c0
	I1109 10:31:18.043569   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:18.043574   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:18.043619   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:18.537272   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:18.558965   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:18.558988   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:18.558999   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:18.562923   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:18.562937   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:18.562945   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:18.562952   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:18.562959   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:18.562966   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:18 GMT
	I1109 10:31:18.562973   29322 round_trippers.go:580]     Audit-Id: c73ea383-9db8-4997-aa83-0d7e7defc95f
	I1109 10:31:18.562979   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:18.563054   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:18.563446   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:18.563452   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:18.563458   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:18.563464   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:18.565559   29322 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 10:31:18.565569   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:18.565574   29322 round_trippers.go:580]     Audit-Id: 1f7a7220-015b-48cd-85af-0338196439ee
	I1109 10:31:18.565581   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:18.565586   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:18.565591   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:18.565596   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:18.565600   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:18 GMT
	I1109 10:31:18.565806   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:19.037704   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:19.037727   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:19.037740   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:19.037750   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:19.041220   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:19.041236   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:19.041243   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:19 GMT
	I1109 10:31:19.041250   29322 round_trippers.go:580]     Audit-Id: 7f0a2e74-69af-4abb-a8cd-b7026f5f3146
	I1109 10:31:19.041256   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:19.041263   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:19.041269   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:19.041275   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:19.041692   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:19.042078   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:19.042085   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:19.042091   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:19.042096   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:19.043850   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:19.043859   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:19.043865   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:19.043870   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:19 GMT
	I1109 10:31:19.043875   29322 round_trippers.go:580]     Audit-Id: d80df721-9f64-494d-b981-1d67059701c9
	I1109 10:31:19.043879   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:19.043884   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:19.043891   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:19.043943   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:19.044123   29322 pod_ready.go:102] pod "coredns-565d847f94-fx6lt" in "kube-system" namespace has status "Ready":"False"
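For reference, the GET/status/header lines in this trace come from client-go's debug round tripper (round_trippers.go), and the truncated bodies from its request.go; both are emitted only at high klog verbosity. My recollection is that URLs and status codes appear around -v=6, headers around -v=7, and truncated bodies like the ones above around -v=8, so treat those exact thresholds as an assumption rather than documented fact. A minimal sketch of enabling the same tracing in a standalone Go client:

    package main

    import (
    	"flag"

    	"k8s.io/klog/v2"
    )

    func main() {
    	// Register klog's flags (-v, -alsologtostderr, ...) and raise the
    	// verbosity so client-go's round_trippers.go / request.go debug
    	// output is emitted for every API call the process makes.
    	klog.InitFlags(nil)
    	_ = flag.Set("v", "8")
    	_ = flag.Set("alsologtostderr", "true")
    	flag.Parse()

    	klog.V(8).Info("client-go HTTP tracing enabled")
    	// ... build a clientset as in the earlier sketch; its requests will
    	// now log GET lines, response headers, and truncated bodies.
    }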
	I1109 10:31:19.537264   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:19.537285   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:19.537298   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:19.537308   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:19.541005   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:19.541025   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:19.541036   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:19 GMT
	I1109 10:31:19.541045   29322 round_trippers.go:580]     Audit-Id: 55b23125-a937-4e13-a1cf-57d66fcbe7a6
	I1109 10:31:19.541054   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:19.541064   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:19.541073   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:19.541081   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:19.541485   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:19.541779   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:19.541786   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:19.541792   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:19.541797   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:19.543600   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:19.543613   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:19.543619   29322 round_trippers.go:580]     Audit-Id: f7156a6a-5ddc-4f69-ad81-f5a48d5522c0
	I1109 10:31:19.543624   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:19.543629   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:19.543634   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:19.543641   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:19.543646   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:19 GMT
	I1109 10:31:19.543993   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:20.037954   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:20.037976   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:20.037989   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:20.038001   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:20.041834   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:20.041848   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:20.041855   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:20.041862   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:20.041869   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:20.041875   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:20 GMT
	I1109 10:31:20.041881   29322 round_trippers.go:580]     Audit-Id: b241b992-06b9-4c52-8a1f-e90c3e9cf7a5
	I1109 10:31:20.041888   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:20.041952   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:20.042322   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:20.042331   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:20.042339   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:20.042346   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:20.044447   29322 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 10:31:20.044456   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:20.044462   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:20.044467   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:20 GMT
	I1109 10:31:20.044472   29322 round_trippers.go:580]     Audit-Id: 3e271430-ef86-43eb-8807-6dbac4fefa63
	I1109 10:31:20.044478   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:20.044482   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:20.044487   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:20.044531   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:20.538760   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:20.538783   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:20.538795   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:20.538805   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:20.542449   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:20.542464   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:20.542473   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:20.542479   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:20.542485   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:20.542493   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:20.542499   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:20 GMT
	I1109 10:31:20.542506   29322 round_trippers.go:580]     Audit-Id: 8574b39c-cb81-4884-b855-9feaebab2bdd
	I1109 10:31:20.542588   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:20.542966   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:20.542972   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:20.542978   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:20.542983   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:20.544704   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:20.544715   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:20.544722   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:20.544727   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:20 GMT
	I1109 10:31:20.544732   29322 round_trippers.go:580]     Audit-Id: 228808bc-e64a-4c81-8249-c4b9633c61f3
	I1109 10:31:20.544755   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:20.544764   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:20.544770   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:20.544838   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:21.037607   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:21.037653   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:21.037679   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:21.037686   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:21.040590   29322 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 10:31:21.040602   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:21.040608   29322 round_trippers.go:580]     Audit-Id: 13f24be9-4b98-45a9-be86-288680789fe0
	I1109 10:31:21.040613   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:21.040618   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:21.040623   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:21.040628   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:21.040633   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:21 GMT
	I1109 10:31:21.040716   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:21.041009   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:21.041016   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:21.041022   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:21.041027   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:21.044014   29322 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 10:31:21.044023   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:21.044031   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:21 GMT
	I1109 10:31:21.044038   29322 round_trippers.go:580]     Audit-Id: f3a0ab83-a8cb-4016-8b24-2fd9ff068801
	I1109 10:31:21.044044   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:21.044049   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:21.044053   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:21.044058   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:21.044384   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:21.044571   29322 pod_ready.go:102] pod "coredns-565d847f94-fx6lt" in "kube-system" namespace has status "Ready":"False"
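
The block above is one pass of minikube's readiness poll: roughly every 500 ms it GETs the coredns pod, then the node it runs on, and the pod_ready.go:102 line records that the Ready condition is still False. As a rough illustration only (not minikube's actual code), a client-go poll with the same shape could look like the sketch below; the namespace, pod name, and 500 ms cadence come from the log, while the kubeconfig path and 5-minute timeout are made-up assumptions.

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path; minikube wires up its client differently.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Poll every 500 ms (the cadence visible in the log) until the pod's
	// Ready condition is True or the timeout expires.
	err = wait.PollImmediate(500*time.Millisecond, 5*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
			"coredns-565d847f94-fx6lt", metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				// Mirrors the pod_ready.go:102 lines: report the current
				// status and keep polling while it is not yet True.
				fmt.Printf("pod has status Ready:%q\n", c.Status)
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
	if err != nil {
		log.Fatal(err)
	}
}
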
	I1109 10:31:21.539201   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:21.539225   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:21.539239   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:21.539249   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:21.543261   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:21.543278   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:21.543286   29322 round_trippers.go:580]     Audit-Id: 79308c26-ad97-43d2-aa7f-dd56caf2a8ee
	I1109 10:31:21.543293   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:21.543299   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:21.543306   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:21.543313   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:21.543321   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:21 GMT
	I1109 10:31:21.543405   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:21.543782   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:21.543792   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:21.543801   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:21.543807   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:21.545567   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:21.545576   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:21.545581   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:21 GMT
	I1109 10:31:21.545586   29322 round_trippers.go:580]     Audit-Id: b47eb9a8-241d-40f3-be42-f3ec8653709c
	I1109 10:31:21.545591   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:21.545596   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:21.545601   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:21.545605   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:21.545644   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:22.039058   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:22.039080   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:22.039092   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:22.039102   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:22.042639   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:22.042652   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:22.042660   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:22.042667   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:22.042674   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:22.042682   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:22 GMT
	I1109 10:31:22.042688   29322 round_trippers.go:580]     Audit-Id: 31049b80-c4f3-458f-9961-645c61c01f13
	I1109 10:31:22.042695   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:22.042780   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:22.043152   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:22.043162   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:22.043170   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:22.043192   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:22.044928   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:22.044938   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:22.044943   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:22 GMT
	I1109 10:31:22.044948   29322 round_trippers.go:580]     Audit-Id: ab1133ae-333b-46ae-9710-2ca3ac93a902
	I1109 10:31:22.044954   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:22.044958   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:22.044963   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:22.044968   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:22.045265   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:22.537634   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:22.537655   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:22.537667   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:22.537678   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:22.541493   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:22.541507   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:22.541515   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:22.541521   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:22.541527   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:22 GMT
	I1109 10:31:22.541533   29322 round_trippers.go:580]     Audit-Id: f59afa58-62fa-4b4f-ad29-56d2ac96eca5
	I1109 10:31:22.541540   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:22.541547   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:22.541754   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:22.542073   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:22.542081   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:22.542087   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:22.542093   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:22.544210   29322 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 10:31:22.544220   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:22.544226   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:22.544231   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:22.544236   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:22.544241   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:22 GMT
	I1109 10:31:22.544246   29322 round_trippers.go:580]     Audit-Id: fa3181b3-9b27-495c-9721-0a40ec1987fc
	I1109 10:31:22.544251   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:22.544388   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:23.039102   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:23.039130   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:23.039142   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:23.039152   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:23.043001   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:23.043016   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:23.043023   29322 round_trippers.go:580]     Audit-Id: 4242578c-4fc4-42fb-b273-9b98f23a40e9
	I1109 10:31:23.043030   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:23.043065   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:23.043071   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:23.043079   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:23.043085   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:23 GMT
	I1109 10:31:23.043369   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:23.043727   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:23.043734   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:23.043740   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:23.043747   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:23.045641   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:23.045651   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:23.045656   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:23.045661   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:23 GMT
	I1109 10:31:23.045666   29322 round_trippers.go:580]     Audit-Id: 1a934e76-2e59-4da4-8c0f-3821188f3d46
	I1109 10:31:23.045670   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:23.045675   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:23.045679   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:23.045911   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:23.046093   29322 pod_ready.go:102] pod "coredns-565d847f94-fx6lt" in "kube-system" namespace has status "Ready":"False"
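
A note on the recurring headers: the X-Kubernetes-Pf-Flowschema-Uid and X-Kubernetes-Pf-Prioritylevel-Uid response headers are added by the API server's Priority and Fairness feature, and they stay constant here because every request matches the same FlowSchema and priority level. The request/response lines themselves come from client-go's round_trippers.go debug logging, which wraps the HTTP transport. Below is a minimal stdlib-only sketch of that wrapping idea; the log format is approximate and example.com stands in for the API server.

package main

import (
	"log"
	"net/http"
	"time"
)

// loggingTransport wraps another RoundTripper and prints the request line,
// request headers, response status, and response headers — the same idea
// behind the round_trippers.go output above.
type loggingTransport struct {
	next http.RoundTripper
}

func (t loggingTransport) RoundTrip(req *http.Request) (*http.Response, error) {
	log.Printf("%s %s", req.Method, req.URL)
	log.Printf("Request Headers:")
	for k, v := range req.Header {
		log.Printf("    %s: %v", k, v)
	}
	start := time.Now()
	resp, err := t.next.RoundTrip(req)
	if err != nil {
		return nil, err
	}
	log.Printf("Response Status: %s in %d milliseconds", resp.Status, time.Since(start).Milliseconds())
	log.Printf("Response Headers:")
	for k, v := range resp.Header {
		log.Printf("    %s: %v", k, v)
	}
	return resp, nil
}

func main() {
	client := &http.Client{Transport: loggingTransport{next: http.DefaultTransport}}
	req, err := http.NewRequest(http.MethodGet, "https://example.com/", nil)
	if err != nil {
		log.Fatal(err)
	}
	req.Header.Set("Accept", "application/json, */*")
	resp, err := client.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	resp.Body.Close()
}
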
	I1109 10:31:23.538389   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:23.560007   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:23.560028   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:23.560045   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:23.563769   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:23.563785   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:23.563793   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:23.563800   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:23 GMT
	I1109 10:31:23.563806   29322 round_trippers.go:580]     Audit-Id: 15c536c0-872d-4602-9033-5da1ef6085fb
	I1109 10:31:23.563813   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:23.563821   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:23.563828   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:23.563900   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:23.564276   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:23.564286   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:23.564294   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:23.564301   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:23.566173   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:23.566185   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:23.566194   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:23.566200   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:23.566206   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:23.566214   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:23 GMT
	I1109 10:31:23.566219   29322 round_trippers.go:580]     Audit-Id: e94ab873-969d-4056-b114-cfddb0f5bc30
	I1109 10:31:23.566228   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:23.566273   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:24.037923   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:24.037949   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:24.037962   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:24.037971   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:24.041889   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:24.041907   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:24.041915   29322 round_trippers.go:580]     Audit-Id: c00c5a61-38e8-4636-afcb-df4038460ae8
	I1109 10:31:24.041922   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:24.041928   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:24.041935   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:24.041941   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:24.041954   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:24 GMT
	I1109 10:31:24.042026   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:24.042405   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:24.042414   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:24.042425   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:24.042433   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:24.044468   29322 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 10:31:24.044477   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:24.044482   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:24.044487   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:24.044492   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:24 GMT
	I1109 10:31:24.044497   29322 round_trippers.go:580]     Audit-Id: 6075aab7-d913-44f5-9943-f56225c923f3
	I1109 10:31:24.044501   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:24.044506   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:24.044547   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:24.537410   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:24.537439   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:24.537452   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:24.537462   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:24.541470   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:24.541490   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:24.541500   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:24.541513   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:24 GMT
	I1109 10:31:24.541524   29322 round_trippers.go:580]     Audit-Id: 8b58d677-8064-4da2-98d3-aec952625b5b
	I1109 10:31:24.541531   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:24.541538   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:24.541546   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:24.541630   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:24.541946   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:24.541952   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:24.541958   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:24.541964   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:24.544390   29322 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 10:31:24.544400   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:24.544405   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:24.544410   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:24.544415   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:24.544420   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:24 GMT
	I1109 10:31:24.544425   29322 round_trippers.go:580]     Audit-Id: 85cbe0af-09cb-4389-8b76-4e0227a4a5b3
	I1109 10:31:24.544430   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:24.544557   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:25.039114   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:25.039137   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:25.039150   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:25.039160   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:25.043286   29322 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1109 10:31:25.043303   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:25.043311   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:25 GMT
	I1109 10:31:25.043317   29322 round_trippers.go:580]     Audit-Id: 4ce982a8-6352-4b65-8810-34ef2d6cbe0e
	I1109 10:31:25.043323   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:25.043344   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:25.043362   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:25.043370   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:25.043590   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:25.043938   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:25.043944   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:25.043951   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:25.043957   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:25.045465   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:25.045479   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:25.045488   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:25 GMT
	I1109 10:31:25.045497   29322 round_trippers.go:580]     Audit-Id: adb18846-dc4d-4514-97e5-f8fd47f6cdaf
	I1109 10:31:25.045506   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:25.045514   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:25.045523   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:25.045530   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:25.045591   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:25.539042   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:25.539070   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:25.539082   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:25.539093   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:25.542959   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:25.542974   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:25.542982   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:25.542989   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:25 GMT
	I1109 10:31:25.542997   29322 round_trippers.go:580]     Audit-Id: 33b75cb4-ed56-4795-ab13-5dee43b9ab8f
	I1109 10:31:25.543004   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:25.543011   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:25.543017   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:25.543083   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:25.543477   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:25.543487   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:25.543495   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:25.543503   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:25.545588   29322 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 10:31:25.545620   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:25.545629   29322 round_trippers.go:580]     Audit-Id: 6b0419be-bf94-4ca5-a6b1-c8b64071ab3b
	I1109 10:31:25.545640   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:25.545648   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:25.545655   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:25.545663   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:25.545669   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:25 GMT
	I1109 10:31:25.545868   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:25.546046   29322 pod_ready.go:102] pod "coredns-565d847f94-fx6lt" in "kube-system" namespace has status "Ready":"False"
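
Notice that the pod's resourceVersion stays at 1020 throughout this stretch, so each 500 ms GET returns an identical object. Purely as an aside — this is not what minikube does here — the same wait could be expressed with a watch, which blocks until the object actually changes instead of re-fetching it. A hedged sketch, again assuming a hypothetical kubeconfig path:

package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Watch only the pod seen in this log; events arrive as the object changes.
	w, err := cs.CoreV1().Pods("kube-system").Watch(context.TODO(), metav1.ListOptions{
		FieldSelector: "metadata.name=coredns-565d847f94-fx6lt",
	})
	if err != nil {
		log.Fatal(err)
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		pod, ok := ev.Object.(*corev1.Pod)
		if !ok {
			continue
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				fmt.Println("pod is Ready")
				return
			}
		}
	}
}
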
	I1109 10:31:26.037505   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:26.037530   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:26.037543   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:26.037599   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:26.041250   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:26.041265   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:26.041273   29322 round_trippers.go:580]     Audit-Id: f57f4c25-1d8b-44e4-89e3-a7c84bb46e56
	I1109 10:31:26.041279   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:26.041286   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:26.041292   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:26.041299   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:26.041305   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:26 GMT
	I1109 10:31:26.041387   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:26.041685   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:26.041692   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:26.041698   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:26.041705   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:26.043575   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:26.043585   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:26.043591   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:26.043597   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:26.043602   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:26.043607   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:26 GMT
	I1109 10:31:26.043612   29322 round_trippers.go:580]     Audit-Id: 091ace2a-9c49-49c1-85e1-de080e5527c7
	I1109 10:31:26.043617   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:26.043651   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:26.538386   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:26.538408   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:26.538421   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:26.538430   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:26.541663   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:26.541677   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:26.541685   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:26.541696   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:26.541706   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:26 GMT
	I1109 10:31:26.541716   29322 round_trippers.go:580]     Audit-Id: f319c7f2-dcf6-4272-8565-dd9e964d3a8c
	I1109 10:31:26.541726   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:26.541733   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:26.542223   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:26.542574   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:26.542580   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:26.542586   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:26.542592   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:26.544227   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:26.544237   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:26.544244   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:26.544269   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:26 GMT
	I1109 10:31:26.544277   29322 round_trippers.go:580]     Audit-Id: df86717c-8944-41d3-9c15-7a5e5550c00c
	I1109 10:31:26.544281   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:26.544286   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:26.544292   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:26.544710   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:27.038661   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:27.038685   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:27.038697   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:27.038708   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:27.042372   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:27.042388   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:27.042396   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:27.042402   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:27.042424   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:27.042434   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:27 GMT
	I1109 10:31:27.042441   29322 round_trippers.go:580]     Audit-Id: d9f13960-c505-41d2-aba8-5d758fcc53b9
	I1109 10:31:27.042449   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:27.042512   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:27.042877   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:27.042883   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:27.042889   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:27.042895   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:27.044911   29322 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 10:31:27.044921   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:27.044927   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:27.044932   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:27.044938   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:27.044943   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:27 GMT
	I1109 10:31:27.044948   29322 round_trippers.go:580]     Audit-Id: d0dc42a4-3ea4-4e97-a00f-9553dcf0ce0f
	I1109 10:31:27.044953   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:27.044992   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:27.539051   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:27.539113   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:27.539126   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:27.539139   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:27.543073   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:27.543090   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:27.543098   29322 round_trippers.go:580]     Audit-Id: 770c813b-9187-40da-bee6-fb7d8c4f2f97
	I1109 10:31:27.543104   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:27.543111   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:27.543119   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:27.543133   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:27.543145   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:27 GMT
	I1109 10:31:27.543212   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:27.543604   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:27.543614   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:27.543624   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:27.543631   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:27.545542   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:27.545551   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:27.545557   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:27.545562   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:27.545567   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:27.545572   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:27 GMT
	I1109 10:31:27.545576   29322 round_trippers.go:580]     Audit-Id: 3e759e8e-450a-4c2e-9ade-1210d49d4510
	I1109 10:31:27.545581   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:27.545615   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:28.037178   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:28.037202   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:28.037215   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:28.037225   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:28.040985   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:28.041000   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:28.041007   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:28.041014   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:28 GMT
	I1109 10:31:28.041021   29322 round_trippers.go:580]     Audit-Id: e28a1813-98d6-4e83-af9e-6d6463e72a3f
	I1109 10:31:28.041046   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:28.041057   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:28.041063   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:28.041133   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:28.041467   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:28.041474   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:28.041479   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:28.041485   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:28.043313   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:28.043323   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:28.043330   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:28.043339   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:28.043345   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:28.043351   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:28 GMT
	I1109 10:31:28.043358   29322 round_trippers.go:580]     Audit-Id: 45436f58-4394-40f1-8107-28f8e578100d
	I1109 10:31:28.043371   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:28.043524   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:28.043704   29322 pod_ready.go:102] pod "coredns-565d847f94-fx6lt" in "kube-system" namespace has status "Ready":"False"
	I1109 10:31:28.538150   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:28.559908   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:28.559925   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:28.559936   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:28.563849   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:28.563864   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:28.563871   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:28.563878   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:28.563885   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:28.563891   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:28 GMT
	I1109 10:31:28.563898   29322 round_trippers.go:580]     Audit-Id: d3bb2d7b-77a8-4827-8520-2ad9a9fca3db
	I1109 10:31:28.563905   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:28.563971   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:28.564351   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:28.564357   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:28.564363   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:28.564368   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:28.566281   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:28.566291   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:28.566296   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:28.566301   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:28.566307   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:28.566311   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:28 GMT
	I1109 10:31:28.566316   29322 round_trippers.go:580]     Audit-Id: aa4af8ea-f700-4156-9523-7de556a536a9
	I1109 10:31:28.566321   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:28.566353   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:29.037157   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:29.037181   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:29.037193   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:29.037204   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:29.040645   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:29.040662   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:29.040671   29322 round_trippers.go:580]     Audit-Id: f0c9a0b5-6349-4170-922c-9b479caaf39e
	I1109 10:31:29.040677   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:29.040684   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:29.040691   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:29.040697   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:29.040704   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:29 GMT
	I1109 10:31:29.040770   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:29.041147   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:29.041155   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:29.041164   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:29.041171   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:29.043066   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:29.043076   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:29.043081   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:29.043089   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:29.043094   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:29.043101   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:29.043105   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:29 GMT
	I1109 10:31:29.043110   29322 round_trippers.go:580]     Audit-Id: 66be2469-7220-47ec-9365-83ac84d8ae1a
	I1109 10:31:29.043166   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:29.537066   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:29.537090   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:29.537104   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:29.537119   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:29.540288   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:29.540300   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:29.540306   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:29.540310   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:29.540315   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:29 GMT
	I1109 10:31:29.540322   29322 round_trippers.go:580]     Audit-Id: b6c56e7e-c138-448a-a6d0-ae34d1a0568c
	I1109 10:31:29.540328   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:29.540334   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:29.540449   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:29.540746   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:29.540753   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:29.540759   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:29.540764   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:29.542869   29322 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 10:31:29.542880   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:29.542886   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:29.542890   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:29 GMT
	I1109 10:31:29.542895   29322 round_trippers.go:580]     Audit-Id: 5a2748d7-f4ab-4b4b-8b95-e1070688bcdb
	I1109 10:31:29.542900   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:29.542904   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:29.542910   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:29.542962   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:30.038844   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:30.038872   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:30.038885   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:30.038894   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:30.042862   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:30.042880   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:30.042887   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:30.042893   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:30.042899   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:30.042906   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:30 GMT
	I1109 10:31:30.042913   29322 round_trippers.go:580]     Audit-Id: 9976e71e-fd62-4295-bce4-adb96423044b
	I1109 10:31:30.042919   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:30.043010   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:30.043387   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:30.043393   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:30.043399   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:30.043405   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:30.044981   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:30.044990   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:30.044999   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:30 GMT
	I1109 10:31:30.045005   29322 round_trippers.go:580]     Audit-Id: 98510584-9289-4dc8-9e8f-919a19b715c9
	I1109 10:31:30.045010   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:30.045014   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:30.045019   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:30.045024   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:30.045066   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:30.045238   29322 pod_ready.go:102] pod "coredns-565d847f94-fx6lt" in "kube-system" namespace has status "Ready":"False"
	I1109 10:31:30.537304   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:30.537326   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:30.537339   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:30.537349   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:30.541359   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:30.541370   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:30.541375   29322 round_trippers.go:580]     Audit-Id: 43e58940-c157-4c07-9a91-3546ce4517be
	I1109 10:31:30.541380   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:30.541384   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:30.541389   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:30.541394   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:30.541398   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:30 GMT
	I1109 10:31:30.541447   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:30.541733   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:30.541740   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:30.541745   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:30.541751   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:30.543712   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:30.543722   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:30.543727   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:30 GMT
	I1109 10:31:30.543732   29322 round_trippers.go:580]     Audit-Id: ad0c876b-9e09-4b53-8e55-2ada1d4ef210
	I1109 10:31:30.543738   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:30.543742   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:30.543749   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:30.543756   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:30.543895   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:31.037796   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:31.037818   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:31.037831   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:31.037841   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:31.041171   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:31.041186   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:31.041194   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:31.041200   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:31.041206   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:31.041214   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:31 GMT
	I1109 10:31:31.041223   29322 round_trippers.go:580]     Audit-Id: cf62655f-f1cd-47b2-ad2f-6276ad98caad
	I1109 10:31:31.041229   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:31.041287   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:31.041628   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:31.041635   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:31.041641   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:31.041661   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:31.043599   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:31.043609   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:31.043614   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:31.043619   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:31 GMT
	I1109 10:31:31.043624   29322 round_trippers.go:580]     Audit-Id: 9a1c16ba-10af-4889-a6d7-52b4fbd870f6
	I1109 10:31:31.043628   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:31.043633   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:31.043638   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:31.043671   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:31.537031   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:31.537058   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:31.537071   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:31.537081   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:31.540881   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:31.540896   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:31.540904   29322 round_trippers.go:580]     Audit-Id: 0bb11d33-096e-421b-85a3-11a741bf646d
	I1109 10:31:31.540911   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:31.540918   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:31.540925   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:31.540931   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:31.540937   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:31 GMT
	I1109 10:31:31.541006   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:31.541286   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:31.541293   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:31.541299   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:31.541304   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:31.543027   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:31.543035   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:31.543040   29322 round_trippers.go:580]     Audit-Id: 3b1e896f-8900-47b3-8366-e1b34cbd4d42
	I1109 10:31:31.543045   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:31.543051   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:31.543055   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:31.543060   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:31.543064   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:31 GMT
	I1109 10:31:31.543100   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:32.036820   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:32.036847   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:32.036859   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:32.036870   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:32.040491   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:32.040507   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:32.040514   29322 round_trippers.go:580]     Audit-Id: b259d728-ddeb-460f-82a5-85054765e0fb
	I1109 10:31:32.040521   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:32.040527   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:32.040534   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:32.040540   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:32.040546   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:32 GMT
	I1109 10:31:32.040881   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:32.041200   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:32.041207   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:32.041213   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:32.041219   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:32.043177   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:32.043186   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:32.043191   29322 round_trippers.go:580]     Audit-Id: 06a54d23-c67b-4a2f-9227-6f2690d09d47
	I1109 10:31:32.043196   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:32.043201   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:32.043206   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:32.043211   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:32.043215   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:32 GMT
	I1109 10:31:32.043338   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:32.536965   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:32.536986   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:32.536999   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:32.537009   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:32.542212   29322 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1109 10:31:32.542224   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:32.542231   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:32.542236   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:32 GMT
	I1109 10:31:32.542241   29322 round_trippers.go:580]     Audit-Id: 144227e5-fff2-49c0-888b-6fb6409f7aff
	I1109 10:31:32.542245   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:32.542250   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:32.542255   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:32.542305   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:32.542589   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:32.542595   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:32.542601   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:32.542606   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:32.544461   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:32.544471   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:32.544476   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:32.544481   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:32.544486   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:32 GMT
	I1109 10:31:32.544491   29322 round_trippers.go:580]     Audit-Id: 57a3641d-adfe-45dd-8d7d-599c12f2b8fb
	I1109 10:31:32.544497   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:32.544502   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:32.544541   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:32.544715   29322 pod_ready.go:102] pod "coredns-565d847f94-fx6lt" in "kube-system" namespace has status "Ready":"False"
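
The half-second cycle visible above — GET the coredns pod, GET its node, report "Ready":"False", wait, retry — is the standard client-go readiness poll. Below is a minimal sketch of that pattern; it is not minikube's actual pod_ready.go, and the helper names, the fixed 500 ms ticker, and the wiring are assumptions inferred from this log.

// Sketch only: illustrates the poll loop behind the repeated GETs above.
// Names and the 500 ms cadence are assumptions, not minikube's real code.
package podwait

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// isPodReady reports whether the pod's PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitPodReady re-fetches the pod roughly twice per second until it reports
// Ready or the context expires, logging the not-ready status between polls.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if isPodReady(pod) {
			return nil
		}
		fmt.Printf("pod %q in %q namespace has status \"Ready\":\"False\"\n", name, ns)
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
		}
	}
}

Each iteration of that loop accounts for one pod GET in the log; the paired node GET comes from a separate node-status check in the same wait path.
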
	I1109 10:31:33.036832   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:33.036854   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:33.036867   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:33.036877   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:33.040718   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:33.040729   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:33.040734   29322 round_trippers.go:580]     Audit-Id: d2c1634f-e3a9-4f66-85f3-0d4ee0d03c64
	I1109 10:31:33.040739   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:33.040744   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:33.040749   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:33.040753   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:33.040758   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:33 GMT
	I1109 10:31:33.040818   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:33.041104   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:33.041111   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:33.041117   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:33.041122   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:33.042936   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:33.042946   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:33.042951   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:33.042956   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:33.042961   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:33 GMT
	I1109 10:31:33.042966   29322 round_trippers.go:580]     Audit-Id: 18bc118c-f002-4ece-92fa-eca8429200dd
	I1109 10:31:33.042971   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:33.042981   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:33.043018   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:33.536863   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:33.558588   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:33.558616   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:33.558630   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:33.562328   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:33.562343   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:33.562351   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:33 GMT
	I1109 10:31:33.562357   29322 round_trippers.go:580]     Audit-Id: c6860baf-a3f4-4414-b021-e43498a7de3d
	I1109 10:31:33.562365   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:33.562373   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:33.562381   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:33.562388   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:33.562765   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:33.563145   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:33.563154   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:33.563162   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:33.563206   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:33.565137   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:33.565145   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:33.565150   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:33.565154   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:33.565159   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:33.565164   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:33.565169   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:33 GMT
	I1109 10:31:33.565174   29322 round_trippers.go:580]     Audit-Id: 2da252fc-8036-4445-8ce3-aaf94817a633
	I1109 10:31:33.565209   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:34.036842   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:34.036864   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:34.036876   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:34.036885   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:34.039979   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:34.040008   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:34.040014   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:34 GMT
	I1109 10:31:34.040018   29322 round_trippers.go:580]     Audit-Id: a0b92930-5c69-4b0c-b0e8-552bfa5d1c3b
	I1109 10:31:34.040023   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:34.040027   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:34.040031   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:34.040036   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:34.040116   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:34.040390   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:34.040397   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:34.040402   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:34.040407   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:34.042252   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:34.042263   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:34.042270   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:34.042275   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:34 GMT
	I1109 10:31:34.042280   29322 round_trippers.go:580]     Audit-Id: f02a2745-067a-46d6-9c8d-1866720b0e16
	I1109 10:31:34.042286   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:34.042290   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:34.042295   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:34.042580   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:34.536995   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:34.537018   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:34.537032   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:34.537042   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:34.541234   29322 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1109 10:31:34.541264   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:34.541272   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:34 GMT
	I1109 10:31:34.541277   29322 round_trippers.go:580]     Audit-Id: 841b494d-695c-4857-bf89-35c9cb669b9b
	I1109 10:31:34.541281   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:34.541286   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:34.541290   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:34.541294   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:34.541343   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:34.541642   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:34.541648   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:34.541654   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:34.541659   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:34.543553   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:34.543563   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:34.543568   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:34.543573   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:34 GMT
	I1109 10:31:34.543578   29322 round_trippers.go:580]     Audit-Id: b95b4155-eb9b-48aa-b56c-941da76c3d94
	I1109 10:31:34.543583   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:34.543588   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:34.543592   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:34.543919   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:35.037027   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:35.037049   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:35.037061   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:35.037071   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:35.041491   29322 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1109 10:31:35.041514   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:35.041523   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:35.041531   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:35.041538   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:35.041545   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:35.041558   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:35 GMT
	I1109 10:31:35.041567   29322 round_trippers.go:580]     Audit-Id: 001817f9-9d58-4c99-8433-5384df9eade4
	I1109 10:31:35.041638   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:35.042065   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:35.042075   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:35.042084   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:35.042092   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:35.044392   29322 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 10:31:35.044403   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:35.044408   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:35.044413   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:35.044418   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:35.044422   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:35.044427   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:35 GMT
	I1109 10:31:35.044432   29322 round_trippers.go:580]     Audit-Id: dd070cce-3b90-4546-a619-cc73a4d7f0f4
	I1109 10:31:35.044681   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:35.044869   29322 pod_ready.go:102] pod "coredns-565d847f94-fx6lt" in "kube-system" namespace has status "Ready":"False"
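
The round_trippers.go lines themselves come from client-go's verbose debug transport, which wraps the HTTP transport and prints each request's verb and URL, the request headers, the response status with latency, and the response headers. Purely as a sketch of that wrapping technique — not client-go's real implementation; the type name and exact output format here are assumptions:

// Sketch of a logging http.RoundTripper producing output in roughly the
// shape of the round_trippers.go lines in this log. Illustrative only.
package main

import (
	"log"
	"net/http"
	"time"
)

type loggingRoundTripper struct {
	next http.RoundTripper
}

func (l loggingRoundTripper) RoundTrip(req *http.Request) (*http.Response, error) {
	log.Printf("%s %s", req.Method, req.URL)
	log.Print("Request Headers:")
	for k, vals := range req.Header {
		for _, v := range vals {
			log.Printf("    %s: %s", k, v)
		}
	}
	start := time.Now()
	resp, err := l.next.RoundTrip(req)
	if err != nil {
		return nil, err
	}
	log.Printf("Response Status: %s in %d milliseconds", resp.Status, time.Since(start).Milliseconds())
	log.Print("Response Headers:")
	for k, vals := range resp.Header {
		for _, v := range vals {
			log.Printf("    %s: %s", k, v)
		}
	}
	return resp, nil
}

func main() {
	// Any reachable URL demonstrates the output; the apiserver address in
	// this log (https://127.0.0.1:62610) only existed on the test host.
	client := &http.Client{Transport: loggingRoundTripper{next: http.DefaultTransport}}
	resp, err := client.Get("https://example.com/")
	if err != nil {
		log.Fatal(err)
	}
	resp.Body.Close()
}

Running it against any live endpoint prints the same GET / Request Headers / Response Status / Response Headers sequence seen throughout this transcript.
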
	I1109 10:31:35.536750   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:35.536771   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:35.536783   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:35.536793   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:35.540153   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:35.540172   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:35.540183   29322 round_trippers.go:580]     Audit-Id: 8a45ef4b-faf2-4baf-a63d-56fc7f9ff144
	I1109 10:31:35.540192   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:35.540202   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:35.540208   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:35.540214   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:35.540220   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:35 GMT
	I1109 10:31:35.540363   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:35.540739   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:35.540747   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:35.540755   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:35.540762   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:35.542801   29322 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 10:31:35.542811   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:35.542817   29322 round_trippers.go:580]     Audit-Id: 6e4c511d-4457-4f2f-ba7b-344770a1b8b4
	I1109 10:31:35.542822   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:35.542826   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:35.542831   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:35.542836   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:35.542840   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:35 GMT
	I1109 10:31:35.543126   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:36.038073   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:36.038097   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:36.038111   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:36.038121   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:36.042124   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:36.042139   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:36.042147   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:36.042155   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:36 GMT
	I1109 10:31:36.042161   29322 round_trippers.go:580]     Audit-Id: 3225db78-f63b-495a-8e85-97774f4283e0
	I1109 10:31:36.042168   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:36.042174   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:36.042181   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:36.042258   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:36.042583   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:36.042589   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:36.042595   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:36.042602   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:36.044167   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:36.044177   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:36.044183   29322 round_trippers.go:580]     Audit-Id: f5a232db-25ad-4daf-b1dc-f733b9de4f1c
	I1109 10:31:36.044208   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:36.044218   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:36.044224   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:36.044230   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:36.044238   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:36 GMT
	I1109 10:31:36.044526   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:36.537197   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:36.537219   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:36.537232   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:36.537242   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:36.541001   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:36.541014   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:36.541020   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:36.541024   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:36.541029   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:36.541034   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:36.541038   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:36 GMT
	I1109 10:31:36.541043   29322 round_trippers.go:580]     Audit-Id: e38355c9-478e-401f-87c4-51d3b0afb5d5
	I1109 10:31:36.541134   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:36.541420   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:36.541426   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:36.541432   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:36.541437   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:36.543152   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:36.543169   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:36.543175   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:36.543180   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:36.543185   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:36.543191   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:36.543196   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:36 GMT
	I1109 10:31:36.543200   29322 round_trippers.go:580]     Audit-Id: b4cfe0e1-28b8-46e0-8cde-6b2967662db8
	I1109 10:31:36.543397   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:37.036890   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:37.036912   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:37.036925   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:37.036935   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:37.040461   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:37.040474   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:37.040480   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:37.040486   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:37.040492   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:37.040501   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:37.040509   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:37 GMT
	I1109 10:31:37.040517   29322 round_trippers.go:580]     Audit-Id: 1a40889a-a620-4802-9602-3cde5ecfb9d5
	I1109 10:31:37.040597   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:37.040893   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:37.040901   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:37.040907   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:37.040912   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:37.042756   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:37.042766   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:37.042772   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:37.042778   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:37.042783   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:37.042788   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:37 GMT
	I1109 10:31:37.042793   29322 round_trippers.go:580]     Audit-Id: 07c5b8e0-6c1e-4379-8556-d834a7a060f3
	I1109 10:31:37.042798   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:37.042842   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:37.537762   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:37.537785   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:37.537797   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:37.537807   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:37.541980   29322 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1109 10:31:37.542019   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:37.542030   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:37 GMT
	I1109 10:31:37.542039   29322 round_trippers.go:580]     Audit-Id: f7b0f9b4-f8a5-400f-9ca5-bc9f05fc64a8
	I1109 10:31:37.542053   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:37.542061   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:37.542069   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:37.542081   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:37.542213   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:37.542919   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:37.542930   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:37.542937   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:37.542945   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:37.544922   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:37.544933   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:37.544939   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:37 GMT
	I1109 10:31:37.544944   29322 round_trippers.go:580]     Audit-Id: ee5f6cb4-c8dd-4d0f-a803-c9eb24ae12b8
	I1109 10:31:37.544950   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:37.544955   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:37.544960   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:37.544965   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:37.545291   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:37.545466   29322 pod_ready.go:102] pod "coredns-565d847f94-fx6lt" in "kube-system" namespace has status "Ready":"False"
	I1109 10:31:38.036926   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:38.036952   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:38.036964   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:38.036974   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:38.040779   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:38.040797   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:38.040805   29322 round_trippers.go:580]     Audit-Id: b9d2efcb-5089-48bc-bbf1-62e551743e2b
	I1109 10:31:38.040813   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:38.040820   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:38.040827   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:38.040834   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:38.040841   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:38 GMT
	I1109 10:31:38.040921   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:38.041308   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:38.041318   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:38.041326   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:38.041334   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:38.043463   29322 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 10:31:38.043473   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:38.043479   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:38.043484   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:38.043489   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:38.043494   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:38.043499   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:38 GMT
	I1109 10:31:38.043504   29322 round_trippers.go:580]     Audit-Id: 4d8162ae-4c2c-439a-81bd-de82015be9e5
	I1109 10:31:38.043548   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:38.537356   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:38.559065   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:38.559083   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:38.559097   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:38.563093   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:38.563111   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:38.563119   29322 round_trippers.go:580]     Audit-Id: 60139ff2-603f-4cd0-98af-f8a35de4d921
	I1109 10:31:38.563146   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:38.563157   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:38.563164   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:38.563171   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:38.563178   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:38 GMT
	I1109 10:31:38.563256   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:38.563616   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:38.563623   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:38.563628   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:38.563634   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:38.565482   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:38.565493   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:38.565507   29322 round_trippers.go:580]     Audit-Id: db902ccb-d24e-4ba8-977b-2746ec39c137
	I1109 10:31:38.565513   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:38.565518   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:38.565523   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:38.565528   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:38.565533   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:38 GMT
	I1109 10:31:38.565582   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:39.036655   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:39.036724   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:39.036738   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:39.036752   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:39.039792   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:39.039807   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:39.039815   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:39.039823   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:39.039829   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:39.039840   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:39 GMT
	I1109 10:31:39.039848   29322 round_trippers.go:580]     Audit-Id: 0934efc0-2cfa-40a6-9fbd-34653c9d6076
	I1109 10:31:39.039854   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:39.039914   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:39.040194   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:39.040201   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:39.040207   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:39.040225   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:39.041773   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:39.041783   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:39.041789   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:39.041794   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:39.041799   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:39 GMT
	I1109 10:31:39.041805   29322 round_trippers.go:580]     Audit-Id: 1d4585c6-060c-4ea6-9351-27c84b8cd999
	I1109 10:31:39.041809   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:39.041814   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:39.041856   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:39.537333   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:39.537359   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:39.537399   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:39.537423   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:39.541535   29322 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1109 10:31:39.541551   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:39.541559   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:39.541565   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:39 GMT
	I1109 10:31:39.541573   29322 round_trippers.go:580]     Audit-Id: 7b8aa59d-c7d2-421f-a1bd-230da0a63aa1
	I1109 10:31:39.541580   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:39.541587   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:39.541594   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:39.541674   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:39.542049   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:39.542058   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:39.542067   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:39.542088   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:39.544007   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:39.544016   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:39.544021   29322 round_trippers.go:580]     Audit-Id: a476bdd8-cd98-47d3-86d4-2cb557cfc75e
	I1109 10:31:39.544031   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:39.544036   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:39.544040   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:39.544045   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:39.544050   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:39 GMT
	I1109 10:31:39.544093   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:40.036879   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:40.036905   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:40.036918   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:40.036928   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:40.040896   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:40.040926   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:40.040933   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:40.040938   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:40 GMT
	I1109 10:31:40.040943   29322 round_trippers.go:580]     Audit-Id: 55224fad-ba88-4c81-85e6-44fd80bde8c7
	I1109 10:31:40.040949   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:40.040956   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:40.040963   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:40.041023   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:40.041337   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:40.041344   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:40.041350   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:40.041363   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:40.043078   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:40.043090   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:40.043098   29322 round_trippers.go:580]     Audit-Id: 40532329-031d-44e1-a10d-18a75b03266b
	I1109 10:31:40.043105   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:40.043110   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:40.043116   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:40.043120   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:40.043125   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:40 GMT
	I1109 10:31:40.043307   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:40.043484   29322 pod_ready.go:102] pod "coredns-565d847f94-fx6lt" in "kube-system" namespace has status "Ready":"False"
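The pod_ready.go:102 line above marks one pass of the readiness wait: roughly every 500ms the test re-fetches the CoreDNS pod (and its node) and keeps looping while the pod's Ready condition is False. A minimal client-go sketch of such a poll follows; the helper name waitPodReady and the interval/timeout values are illustrative assumptions, not minikube's actual pod_ready.go code.

// Sketch: poll a pod until its Ready condition is True or the timeout
// expires. Assumed helper for illustration only; not minikube's code.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
		// Each iteration issues a GET like the ones round_trippers.go logs above.
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, nil // treat transient errors as "not ready yet"
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				fmt.Printf("pod %q in %q namespace has status Ready: %v\n", name, ns, c.Status == corev1.ConditionTrue)
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil // Ready condition not reported yet
	})
}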
	I1109 10:31:40.536695   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:40.536721   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:40.536734   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:40.536743   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:40.540575   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:40.540590   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:40.540598   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:40.540604   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:40.540611   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:40 GMT
	I1109 10:31:40.540617   29322 round_trippers.go:580]     Audit-Id: f9a0b569-78e2-4176-8d0e-b9eb917d61a2
	I1109 10:31:40.540624   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:40.540631   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:40.540706   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:40.541035   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:40.541041   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:40.541047   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:40.541053   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:40.542755   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:40.542765   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:40.542770   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:40 GMT
	I1109 10:31:40.542775   29322 round_trippers.go:580]     Audit-Id: 4f0b38e6-dbd4-41a1-9cea-8d76268b4f15
	I1109 10:31:40.542779   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:40.542784   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:40.542789   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:40.542794   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:40.542904   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:41.036572   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:41.036599   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:41.036612   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:41.036622   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:41.039850   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:41.039859   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:41.039865   29322 round_trippers.go:580]     Audit-Id: 2fc7dba9-2a1c-49cc-af1f-87ee7d99d8ef
	I1109 10:31:41.039869   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:41.039876   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:41.039881   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:41.039886   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:41.039891   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:41 GMT
	I1109 10:31:41.040234   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:41.040516   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:41.040522   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:41.040528   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:41.040533   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:41.042672   29322 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 10:31:41.042681   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:41.042686   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:41.042691   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:41.042696   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:41.042701   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:41.042705   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:41 GMT
	I1109 10:31:41.042712   29322 round_trippers.go:580]     Audit-Id: cb93996e-c5df-49ef-893a-d79ca72b5817
	I1109 10:31:41.043106   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:41.537112   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:41.537139   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:41.537152   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:41.537163   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:41.541243   29322 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1109 10:31:41.541259   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:41.541267   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:41.541274   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:41 GMT
	I1109 10:31:41.541280   29322 round_trippers.go:580]     Audit-Id: cb9bf68d-4a84-4506-baeb-8736ddf3eed7
	I1109 10:31:41.541286   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:41.541337   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:41.541392   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:41.541484   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:41.541780   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:41.541787   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:41.541792   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:41.541797   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:41.543543   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:41.543553   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:41.543558   29322 round_trippers.go:580]     Audit-Id: 97e107ae-0215-47eb-a916-81340c90091e
	I1109 10:31:41.543563   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:41.543568   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:41.543573   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:41.543579   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:41.543586   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:41 GMT
	I1109 10:31:41.543706   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:42.036869   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:42.036888   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:42.036916   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:42.036926   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:42.039863   29322 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 10:31:42.039873   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:42.039878   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:42.039884   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:42.039889   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:42.039894   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:42 GMT
	I1109 10:31:42.039899   29322 round_trippers.go:580]     Audit-Id: 5adec263-61c7-437d-9c8a-83d6b940f7c8
	I1109 10:31:42.039904   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:42.039964   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:42.040242   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:42.040249   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:42.040254   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:42.040259   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:42.041995   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:42.042005   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:42.042011   29322 round_trippers.go:580]     Audit-Id: 6e650d22-26ea-4ff9-a062-8a73a56c448d
	I1109 10:31:42.042016   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:42.042021   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:42.042028   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:42.042033   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:42.042038   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:42 GMT
	I1109 10:31:42.042088   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:42.537409   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:42.537432   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:42.537444   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:42.537454   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:42.540819   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:42.540833   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:42.540841   29322 round_trippers.go:580]     Audit-Id: 899ee5ac-bba4-4e45-9020-4382b9c55cdd
	I1109 10:31:42.540848   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:42.540854   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:42.540861   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:42.540867   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:42.540874   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:42 GMT
	I1109 10:31:42.541162   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:42.541529   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:42.541536   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:42.541543   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:42.541548   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:42.543405   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:42.543415   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:42.543421   29322 round_trippers.go:580]     Audit-Id: 3ba71663-9587-4d56-af1a-44dc6f26c009
	I1109 10:31:42.543426   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:42.543431   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:42.543435   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:42.543440   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:42.543446   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:42 GMT
	I1109 10:31:42.543499   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:42.543678   29322 pod_ready.go:102] pod "coredns-565d847f94-fx6lt" in "kube-system" namespace has status "Ready":"False"
	I1109 10:31:43.036823   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:43.036850   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:43.036862   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:43.036872   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:43.040633   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:43.040653   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:43.040661   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:43 GMT
	I1109 10:31:43.040668   29322 round_trippers.go:580]     Audit-Id: 4b0ad16c-19d6-4ab0-becd-e0090044785e
	I1109 10:31:43.040676   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:43.040683   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:43.040692   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:43.040701   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:43.040863   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:43.041255   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:43.041262   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:43.041268   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:43.041273   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:43.043084   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:43.043093   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:43.043099   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:43.043104   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:43.043109   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:43.043113   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:43 GMT
	I1109 10:31:43.043118   29322 round_trippers.go:580]     Audit-Id: d1986fec-e18b-4445-8a07-cb8e51765ffc
	I1109 10:31:43.043123   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:43.043172   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:43.536483   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:43.557331   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:43.557376   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:43.557391   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:43.561490   29322 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1109 10:31:43.561505   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:43.561515   29322 round_trippers.go:580]     Audit-Id: 6fa30a43-0c2a-41a3-911f-c35ad411a163
	I1109 10:31:43.561522   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:43.561529   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:43.561535   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:43.561543   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:43.561549   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:43 GMT
	I1109 10:31:43.561644   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:43.561935   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:43.561941   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:43.561947   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:43.561953   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:43.563728   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:43.563737   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:43.563742   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:43.563747   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:43 GMT
	I1109 10:31:43.563752   29322 round_trippers.go:580]     Audit-Id: 9a9558e2-5cce-476e-b0c2-5cbb9063f38a
	I1109 10:31:43.563757   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:43.563763   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:43.563768   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:43.563805   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:44.036574   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:44.036602   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:44.036650   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:44.036663   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:44.040346   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:44.040359   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:44.040367   29322 round_trippers.go:580]     Audit-Id: 5222d96d-fa3d-4863-9a8b-19ac31674994
	I1109 10:31:44.040376   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:44.040384   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:44.040391   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:44.040398   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:44.040404   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:44 GMT
	I1109 10:31:44.040468   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:44.040751   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:44.040758   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:44.040764   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:44.040770   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:44.042553   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:44.042564   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:44.042570   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:44.042575   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:44.042580   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:44.042584   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:44.042589   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:44 GMT
	I1109 10:31:44.042595   29322 round_trippers.go:580]     Audit-Id: 02691973-6c37-4452-97f3-b4b2a9f58304
	I1109 10:31:44.042743   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:44.537649   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:44.537671   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:44.537684   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:44.537695   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:44.541874   29322 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1109 10:31:44.541887   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:44.541897   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:44 GMT
	I1109 10:31:44.541903   29322 round_trippers.go:580]     Audit-Id: 621b0cbd-d244-4066-8d4c-52420d442952
	I1109 10:31:44.541911   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:44.541917   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:44.541923   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:44.541932   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:44.541992   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:44.542306   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:44.542313   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:44.542319   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:44.542324   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:44.543942   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:44.543950   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:44.543955   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:44.543962   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:44 GMT
	I1109 10:31:44.543968   29322 round_trippers.go:580]     Audit-Id: 7670d194-b614-4ae5-a540-8c0b9d06f403
	I1109 10:31:44.543973   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:44.543978   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:44.543982   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:44.544361   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:44.544540   29322 pod_ready.go:102] pod "coredns-565d847f94-fx6lt" in "kube-system" namespace has status "Ready":"False"
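Each ~500 ms cycle above is one iteration of minikube's readiness poll: GET the pod, inspect its Ready condition, GET the node, log `"Ready":"False"`, sleep, and retry. A minimal sketch of such a loop with client-go follows; the waitPodReady name is mine, and this approximates rather than reproduces minikube's pod_ready.go:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls the API server roughly twice per second until the
    // pod's Ready condition is True, mirroring the GET cadence in the log.
    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
    	return wait.PollImmediate(500*time.Millisecond, timeout, func() (bool, error) {
    		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    		if err != nil {
    			return false, err
    		}
    		for _, c := range pod.Status.Conditions {
    			if c.Type == corev1.PodReady {
    				return c.Status == corev1.ConditionTrue, nil
    			}
    		}
    		return false, nil // condition not reported yet; keep polling
    	})
    }

    func main() {
    	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(config)
    	if err != nil {
    		panic(err)
    	}
    	if err := waitPodReady(context.Background(), cs, "kube-system", "coredns-565d847f94-fx6lt", 4*time.Minute); err != nil {
    		panic(err)
    	}
    	fmt.Println("pod is Ready")
    }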
	I1109 10:31:45.036672   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:45.036696   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:45.036708   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:45.036718   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:45.039821   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:45.039831   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:45.039840   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:45.039848   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:45 GMT
	I1109 10:31:45.039853   29322 round_trippers.go:580]     Audit-Id: 2706f725-0f50-4b9c-83b8-cdc2cc406a79
	I1109 10:31:45.039857   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:45.039862   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:45.039867   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:45.039930   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:45.040215   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:45.040222   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:45.040228   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:45.040233   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:45.042282   29322 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 10:31:45.042291   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:45.042296   29322 round_trippers.go:580]     Audit-Id: 82dbfdb5-db39-4d0d-9e6b-0a58d91edc63
	I1109 10:31:45.042301   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:45.042307   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:45.042316   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:45.042322   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:45.042326   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:45 GMT
	I1109 10:31:45.042364   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:45.538349   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:45.538375   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:45.538388   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:45.538398   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:45.542063   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:45.542078   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:45.542085   29322 round_trippers.go:580]     Audit-Id: 20db784e-e085-4a3a-be3c-f5c5c001140a
	I1109 10:31:45.542092   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:45.542099   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:45.542105   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:45.542112   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:45.542118   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:45 GMT
	I1109 10:31:45.542179   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:45.542481   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:45.542487   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:45.542493   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:45.542498   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:45.544375   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:45.544386   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:45.544392   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:45.544397   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:45 GMT
	I1109 10:31:45.544401   29322 round_trippers.go:580]     Audit-Id: 4bc65b0c-05aa-44aa-9194-694dce513a02
	I1109 10:31:45.544406   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:45.544411   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:45.544416   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:45.544449   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:46.036916   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:46.036943   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:46.036956   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:46.036966   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:46.040743   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:46.040759   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:46.040766   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:46.040780   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:46.040788   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:46 GMT
	I1109 10:31:46.040795   29322 round_trippers.go:580]     Audit-Id: bb7c85cf-10eb-41a2-a5b4-e173cb5547ab
	I1109 10:31:46.040801   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:46.040810   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:46.040869   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:46.041250   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:46.041257   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:46.041263   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:46.041269   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:46.043115   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:46.043124   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:46.043130   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:46.043135   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:46 GMT
	I1109 10:31:46.043140   29322 round_trippers.go:580]     Audit-Id: 52ce5708-8363-4304-bf71-0338ac2165b6
	I1109 10:31:46.043145   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:46.043150   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:46.043154   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:46.043189   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:46.538442   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:46.538464   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:46.538481   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:46.538492   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:46.542151   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:46.542168   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:46.542176   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:46.542184   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:46 GMT
	I1109 10:31:46.542211   29322 round_trippers.go:580]     Audit-Id: e168128b-3cbf-424a-99af-5849882aa0f5
	I1109 10:31:46.542223   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:46.542230   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:46.542243   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:46.542503   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:46.542867   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:46.542873   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:46.542879   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:46.542885   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:46.545015   29322 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 10:31:46.545025   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:46.545031   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:46.545036   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:46.545041   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:46.545045   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:46.545049   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:46 GMT
	I1109 10:31:46.545054   29322 round_trippers.go:580]     Audit-Id: 1b406548-09f0-4549-8ea3-e7b0756e2b07
	I1109 10:31:46.545089   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:46.545297   29322 pod_ready.go:102] pod "coredns-565d847f94-fx6lt" in "kube-system" namespace has status "Ready":"False"
	I1109 10:31:47.037254   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:47.037282   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:47.037333   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:47.037346   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:47.040984   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:47.041000   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:47.041008   29322 round_trippers.go:580]     Audit-Id: 49c5d17f-4e21-4678-89a0-30bbdeb09aef
	I1109 10:31:47.041014   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:47.041021   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:47.041027   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:47.041035   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:47.041041   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:47 GMT
	I1109 10:31:47.041433   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:47.041714   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:47.041722   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:47.041728   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:47.041733   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:47.043592   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:47.043601   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:47.043606   29322 round_trippers.go:580]     Audit-Id: 68b7d886-9777-4bf1-9a47-bd0eb57a0bbc
	I1109 10:31:47.043611   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:47.043617   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:47.043621   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:47.043626   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:47.043631   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:47 GMT
	I1109 10:31:47.043665   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:47.536538   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:47.536561   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:47.536573   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:47.536583   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:47.540233   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:47.540249   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:47.540260   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:47.540269   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:47.540281   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:47 GMT
	I1109 10:31:47.540291   29322 round_trippers.go:580]     Audit-Id: 9960ada3-e31d-4ad9-8d08-6d76acc22053
	I1109 10:31:47.540301   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:47.540313   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:47.540392   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:47.540687   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:47.540695   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:47.540701   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:47.540707   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:47.542667   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:47.542678   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:47.542684   29322 round_trippers.go:580]     Audit-Id: 7b43c254-7046-4909-b723-b066ae036071
	I1109 10:31:47.542689   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:47.542695   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:47.542700   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:47.542705   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:47.542710   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:47 GMT
	I1109 10:31:47.542743   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:48.036562   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:48.036589   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:48.036601   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:48.036611   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:48.040071   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:48.040089   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:48.040099   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:48 GMT
	I1109 10:31:48.040106   29322 round_trippers.go:580]     Audit-Id: a93b1c13-e8c1-482b-a7f2-2c3dce2bd92e
	I1109 10:31:48.040112   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:48.040118   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:48.040125   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:48.040133   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:48.040422   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:48.040701   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:48.040709   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:48.040715   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:48.040720   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:48.042596   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:48.042606   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:48.042611   29322 round_trippers.go:580]     Audit-Id: f0fb3d1b-c2ab-4de7-9add-e6536466d99b
	I1109 10:31:48.042616   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:48.042621   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:48.042625   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:48.042630   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:48.042635   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:48 GMT
	I1109 10:31:48.042671   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:48.536840   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:48.559516   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:48.559527   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:48.559541   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:48.562439   29322 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 10:31:48.562450   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:48.562456   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:48.562460   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:48.562466   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:48 GMT
	I1109 10:31:48.562470   29322 round_trippers.go:580]     Audit-Id: 866c18cd-33f4-492b-a91a-67eda5b68284
	I1109 10:31:48.562475   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:48.562479   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:48.562526   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1020","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6782 chars]
	I1109 10:31:48.562807   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:48.562813   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:48.562819   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:48.562825   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:48.564641   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:48.564650   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:48.564655   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:48.564660   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:48.564665   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:48 GMT
	I1109 10:31:48.564671   29322 round_trippers.go:580]     Audit-Id: 56e50d0a-24fd-46c0-be60-76c519e69a6c
	I1109 10:31:48.564676   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:48.564681   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:48.564713   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:48.564889   29322 pod_ready.go:102] pod "coredns-565d847f94-fx6lt" in "kube-system" namespace has status "Ready":"False"
	I1109 10:31:49.036383   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:49.036462   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:49.036474   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:49.036484   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:49.039722   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:49.039734   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:49.039745   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:49.039755   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:49.039761   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:49.039765   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:49 GMT
	I1109 10:31:49.039771   29322 round_trippers.go:580]     Audit-Id: ff1e9d97-77bd-4aa5-924a-358b661f0b01
	I1109 10:31:49.039775   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:49.039979   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1073","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6553 chars]
	I1109 10:31:49.040259   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:49.040266   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:49.040272   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:49.040277   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:49.042694   29322 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 10:31:49.042704   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:49.042710   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:49 GMT
	I1109 10:31:49.042716   29322 round_trippers.go:580]     Audit-Id: 4d2b433b-dd92-41c0-9a61-4323ed1e9045
	I1109 10:31:49.042721   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:49.042727   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:49.042731   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:49.042736   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:49.042774   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:49.042951   29322 pod_ready.go:92] pod "coredns-565d847f94-fx6lt" in "kube-system" namespace has status "Ready":"True"
	I1109 10:31:49.042962   29322 pod_ready.go:81] duration metric: took 38.511815615s waiting for pod "coredns-565d847f94-fx6lt" in "kube-system" namespace to be "Ready" ...
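The flip from `"Ready":"False"` to `"Ready":"True"` at 10:31:49 corresponds to the PodReady entry in status.conditions of the Response Body JSON, which client-go decodes into a corev1.Pod. Since the logged bodies are truncated, here is a self-contained illustration with a hand-built stand-in payload shaped like them:

    package main

    import (
    	"encoding/json"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    func main() {
    	// A fragment shaped like the logged Response Body; the real payloads
    	// above are truncated, so this is a constructed stand-in.
    	raw := []byte(`{"kind":"Pod","apiVersion":"v1",
    	  "metadata":{"name":"coredns-565d847f94-fx6lt","namespace":"kube-system"},
    	  "status":{"conditions":[{"type":"Ready","status":"True"}]}}`)

    	var pod corev1.Pod
    	if err := json.Unmarshal(raw, &pod); err != nil {
    		panic(err)
    	}
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			fmt.Printf("pod %q Ready=%s\n", pod.Name, c.Status) // Ready=True
    		}
    	}
    }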
	I1109 10:31:49.042970   29322 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-102528" in "kube-system" namespace to be "Ready" ...
	I1109 10:31:49.042997   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/etcd-multinode-102528
	I1109 10:31:49.043001   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:49.043008   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:49.043014   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:49.044825   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:49.044833   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:49.044844   29322 round_trippers.go:580]     Audit-Id: cdfff0ca-5d2a-49f1-9cb2-42f9c7f7208c
	I1109 10:31:49.044851   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:49.044856   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:49.044862   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:49.044870   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:49.044877   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:49 GMT
	I1109 10:31:49.045039   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-102528","namespace":"kube-system","uid":"5dde8340-2916-4da6-91aa-ea6dfe24a5ad","resourceVersion":"1041","creationTimestamp":"2022-11-09T18:25:54Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"58165e0d3ee72e9b0f054fadec557161","kubernetes.io/config.mirror":"58165e0d3ee72e9b0f054fadec557161","kubernetes.io/config.seen":"2022-11-09T18:25:54.343403314Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6046 chars]
	I1109 10:31:49.045263   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:49.045270   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:49.045276   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:49.045282   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:49.047218   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:49.047227   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:49.047232   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:49 GMT
	I1109 10:31:49.047237   29322 round_trippers.go:580]     Audit-Id: a747d2d2-59d5-4804-98ef-b75ef054f903
	I1109 10:31:49.047244   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:49.047253   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:49.047259   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:49.047270   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:49.047475   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:49.047643   29322 pod_ready.go:92] pod "etcd-multinode-102528" in "kube-system" namespace has status "Ready":"True"
	I1109 10:31:49.047650   29322 pod_ready.go:81] duration metric: took 4.674233ms waiting for pod "etcd-multinode-102528" in "kube-system" namespace to be "Ready" ...
	I1109 10:31:49.047659   29322 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-102528" in "kube-system" namespace to be "Ready" ...
	I1109 10:31:49.047683   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-102528
	I1109 10:31:49.047687   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:49.047693   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:49.047699   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:49.049519   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:49.049531   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:49.049536   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:49.049541   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:49.049546   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:49.049550   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:49 GMT
	I1109 10:31:49.049556   29322 round_trippers.go:580]     Audit-Id: d69b7d01-a93c-493e-b527-40eb3945b564
	I1109 10:31:49.049563   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:49.049730   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-102528","namespace":"kube-system","uid":"f48fa313-e8ec-42bc-87bc-7daede794fe2","resourceVersion":"1029","creationTimestamp":"2022-11-09T18:25:54Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"b110864cd1ed66678c31ad09d14c41ec","kubernetes.io/config.mirror":"b110864cd1ed66678c31ad09d14c41ec","kubernetes.io/config.seen":"2022-11-09T18:25:54.343403906Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 8428 chars]
	I1109 10:31:49.049984   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:49.049991   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:49.049997   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:49.050003   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:49.051638   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:49.051646   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:49.051651   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:49.051656   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:49.051661   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:49.051665   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:49 GMT
	I1109 10:31:49.051671   29322 round_trippers.go:580]     Audit-Id: 271b4e64-19ae-4b51-88c7-7571edbafde1
	I1109 10:31:49.051675   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:49.051888   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:49.052066   29322 pod_ready.go:92] pod "kube-apiserver-multinode-102528" in "kube-system" namespace has status "Ready":"True"
	I1109 10:31:49.052072   29322 pod_ready.go:81] duration metric: took 4.408129ms waiting for pod "kube-apiserver-multinode-102528" in "kube-system" namespace to be "Ready" ...
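The pod_ready waits above poll each control-plane pod until its PodReady condition reports True. A minimal sketch of that condition check with client-go; the helper name isPodReady is illustrative, not minikube's actual code:

    import corev1 "k8s.io/api/core/v1"

    // isPodReady reports whether a pod's PodReady condition is True,
    // mirroring the check behind the pod_ready.go:92 lines above.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }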
	I1109 10:31:49.052078   29322 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-102528" in "kube-system" namespace to be "Ready" ...
	I1109 10:31:49.052110   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-102528
	I1109 10:31:49.052116   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:49.052122   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:49.052127   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:49.054137   29322 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 10:31:49.054146   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:49.054151   29322 round_trippers.go:580]     Audit-Id: e4b150df-5b2e-4f43-b903-08847b9eae86
	I1109 10:31:49.054156   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:49.054161   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:49.054165   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:49.054170   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:49.054175   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:49 GMT
	I1109 10:31:49.054306   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-102528","namespace":"kube-system","uid":"3dd056ba-22b5-4b0c-aa7e-9e00d215df9a","resourceVersion":"1035","creationTimestamp":"2022-11-09T18:25:52Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ec9e561364ffe02db1e38ab82ddc699b","kubernetes.io/config.mirror":"ec9e561364ffe02db1e38ab82ddc699b","kubernetes.io/config.seen":"2022-11-09T18:25:43.900701692Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 8005 chars]
	I1109 10:31:49.054552   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:49.054559   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:49.054565   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:49.054570   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:49.056172   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:49.056180   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:49.056185   29322 round_trippers.go:580]     Audit-Id: dafc6e5f-b537-4c60-bdff-60ca3bf3983d
	I1109 10:31:49.056190   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:49.056194   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:49.056199   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:49.056203   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:49.056208   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:49 GMT
	I1109 10:31:49.056369   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:49.056537   29322 pod_ready.go:92] pod "kube-controller-manager-multinode-102528" in "kube-system" namespace has status "Ready":"True"
	I1109 10:31:49.056544   29322 pod_ready.go:81] duration metric: took 4.461605ms waiting for pod "kube-controller-manager-multinode-102528" in "kube-system" namespace to be "Ready" ...
	I1109 10:31:49.056551   29322 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-9wsxp" in "kube-system" namespace to be "Ready" ...
	I1109 10:31:49.056575   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/kube-proxy-9wsxp
	I1109 10:31:49.056580   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:49.056586   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:49.056591   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:49.058233   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:49.058241   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:49.058246   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:49.058251   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:49.058256   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:49.058261   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:49.058266   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:49 GMT
	I1109 10:31:49.058271   29322 round_trippers.go:580]     Audit-Id: 055832f8-1e45-4830-8e2f-3942f90d38d2
	I1109 10:31:49.058436   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-9wsxp","generateName":"kube-proxy-","namespace":"kube-system","uid":"03c6822b-9fef-4fa3-82a3-bb5082cf31b3","resourceVersion":"1023","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"controller-revision-hash":"b9c5d5dc4","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"bf8e9b6c-a049-46db-b636-548666fd5424","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf8e9b6c-a049-46db-b636-548666fd5424\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5736 chars]
	I1109 10:31:49.058660   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:49.058666   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:49.058672   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:49.058678   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:49.060203   29322 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I1109 10:31:49.060211   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:49.060216   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:49.060220   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:49.060226   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:49.060230   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:49 GMT
	I1109 10:31:49.060236   29322 round_trippers.go:580]     Audit-Id: 040b3107-c5e1-4000-91fe-dd3f869b3cad
	I1109 10:31:49.060240   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:49.060270   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:49.060433   29322 pod_ready.go:92] pod "kube-proxy-9wsxp" in "kube-system" namespace has status "Ready":"True"
	I1109 10:31:49.060439   29322 pod_ready.go:81] duration metric: took 3.883433ms waiting for pod "kube-proxy-9wsxp" in "kube-system" namespace to be "Ready" ...
	I1109 10:31:49.060444   29322 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-c4lh6" in "kube-system" namespace to be "Ready" ...
	I1109 10:31:49.237831   29322 request.go:614] Waited for 177.283934ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/kube-proxy-c4lh6
	I1109 10:31:49.237880   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/kube-proxy-c4lh6
	I1109 10:31:49.237888   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:49.237900   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:49.237911   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:49.241885   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:49.241902   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:49.241911   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:49.241920   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:49.241930   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:49 GMT
	I1109 10:31:49.241938   29322 round_trippers.go:580]     Audit-Id: c00fee46-7caa-4ebc-8618-e170b21456bb
	I1109 10:31:49.241947   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:49.241955   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:49.242016   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-c4lh6","generateName":"kube-proxy-","namespace":"kube-system","uid":"e9055586-6022-464a-acdd-6fce3c87392b","resourceVersion":"845","creationTimestamp":"2022-11-09T18:26:28Z","labels":{"controller-revision-hash":"b9c5d5dc4","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"bf8e9b6c-a049-46db-b636-548666fd5424","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf8e9b6c-a049-46db-b636-548666fd5424\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5741 chars]
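The "Waited ... due to client-side throttling" lines come from client-go's token-bucket rate limiter, which applies its defaults (5 QPS, burst 10) when the rest.Config leaves QPS and Burst at zero, as the kapi.go:59 config dumps in this log do. A sketch of raising those limits to avoid the ~200ms waits; the values are illustrative:

    import "k8s.io/client-go/rest"

    // withHigherLimits loosens client-go's client-side limiter, the
    // source of the request.go:614 waits logged above.
    func withHigherLimits(cfg *rest.Config) *rest.Config {
        cfg.QPS = 50    // steady-state requests per second
        cfg.Burst = 100 // short-term burst allowance
        return cfg
    }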
	I1109 10:31:49.438467   29322 request.go:614] Waited for 196.086951ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:62610/api/v1/nodes/multinode-102528-m02
	I1109 10:31:49.438566   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528-m02
	I1109 10:31:49.438578   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:49.438605   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:49.438617   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:49.443217   29322 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1109 10:31:49.443228   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:49.443234   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:49.443239   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:49 GMT
	I1109 10:31:49.443243   29322 round_trippers.go:580]     Audit-Id: 1eb58aad-1026-4caa-a770-3d00568a3c5d
	I1109 10:31:49.443248   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:49.443253   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:49.443258   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:49.443315   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528-m02","uid":"e1542fe1-dc88-406c-b080-a5120e5abea2","resourceVersion":"857","creationTimestamp":"2022-11-09T18:29:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:29:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:29:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f [truncated 4536 chars]
	I1109 10:31:49.443488   29322 pod_ready.go:92] pod "kube-proxy-c4lh6" in "kube-system" namespace has status "Ready":"True"
	I1109 10:31:49.443494   29322 pod_ready.go:81] duration metric: took 383.055188ms waiting for pod "kube-proxy-c4lh6" in "kube-system" namespace to be "Ready" ...
	I1109 10:31:49.443501   29322 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-kh6r6" in "kube-system" namespace to be "Ready" ...
	I1109 10:31:49.637295   29322 request.go:614] Waited for 193.726412ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/kube-proxy-kh6r6
	I1109 10:31:49.637350   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/kube-proxy-kh6r6
	I1109 10:31:49.637358   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:49.637370   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:49.637380   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:49.641289   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:49.641304   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:49.641312   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:49.641318   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:49 GMT
	I1109 10:31:49.641325   29322 round_trippers.go:580]     Audit-Id: 5d59c0a2-04e2-4809-921e-06e44e8d71a5
	I1109 10:31:49.641331   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:49.641337   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:49.641343   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:49.641585   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-kh6r6","generateName":"kube-proxy-","namespace":"kube-system","uid":"de2bad4b-35b4-4537-a6a3-7acd77c63e69","resourceVersion":"925","creationTimestamp":"2022-11-09T18:27:09Z","labels":{"controller-revision-hash":"b9c5d5dc4","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"bf8e9b6c-a049-46db-b636-548666fd5424","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:27:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf8e9b6c-a049-46db-b636-548666fd5424\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5743 chars]
	I1109 10:31:49.837062   29322 request.go:614] Waited for 195.160582ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:62610/api/v1/nodes/multinode-102528-m03
	I1109 10:31:49.837127   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528-m03
	I1109 10:31:49.837135   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:49.837144   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:49.837151   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:49.839993   29322 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I1109 10:31:49.840003   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:49.840010   29322 round_trippers.go:580]     Audit-Id: d772f876-f389-4d2b-bb46-c37a8e0fe4e7
	I1109 10:31:49.840015   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:49.840020   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:49.840025   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:49.840030   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:49.840034   29322 round_trippers.go:580]     Content-Length: 210
	I1109 10:31:49.840039   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:49 GMT
	I1109 10:31:49.840051   29322 request.go:1154] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"multinode-102528-m03\" not found","reason":"NotFound","details":{"name":"multinode-102528-m03","kind":"nodes"},"code":404}
	I1109 10:31:49.840162   29322 pod_ready.go:97] node "multinode-102528-m03" hosting pod "kube-proxy-kh6r6" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-102528-m03": nodes "multinode-102528-m03" not found
	I1109 10:31:49.840169   29322 pod_ready.go:81] duration metric: took 396.674176ms waiting for pod "kube-proxy-kh6r6" in "kube-system" namespace to be "Ready" ...
	E1109 10:31:49.840174   29322 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-102528-m03" hosting pod "kube-proxy-kh6r6" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-102528-m03": nodes "multinode-102528-m03" not found
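When the node hosting a pod has already been removed, as with the 404 for multinode-102528-m03 above, the wait skips the pod rather than failing. The standard way to detect that case with client-go is apimachinery's status-error helpers; a sketch, with nodeGone standing in for minikube's actual logic:

    import (
        "context"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // nodeGone reports whether a node lookup failed with 404 Not Found,
    // the condition that makes pod_ready.go:97 skip kube-proxy-kh6r6 above.
    func nodeGone(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
        _, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
        if apierrors.IsNotFound(err) {
            return true, nil
        }
        return false, err
    }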
	I1109 10:31:49.840179   29322 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-102528" in "kube-system" namespace to be "Ready" ...
	I1109 10:31:50.037017   29322 request.go:614] Waited for 196.806176ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-102528
	I1109 10:31:50.037072   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-102528
	I1109 10:31:50.037080   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:50.037093   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:50.037134   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:50.040503   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:50.040516   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:50.040524   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:50.040530   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:50.040538   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:50 GMT
	I1109 10:31:50.040544   29322 round_trippers.go:580]     Audit-Id: d2ee6724-57c2-4f45-8850-e4c5e803441a
	I1109 10:31:50.040551   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:50.040557   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:50.040638   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-102528","namespace":"kube-system","uid":"26dff845-4103-4884-86e3-42c37dc577c0","resourceVersion":"1014","creationTimestamp":"2022-11-09T18:25:54Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"9865f6bce1997a307196ce89b4764fd5","kubernetes.io/config.mirror":"9865f6bce1997a307196ce89b4764fd5","kubernetes.io/config.seen":"2022-11-09T18:25:54.343402489Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 4887 chars]
	I1109 10:31:50.238512   29322 request.go:614] Waited for 197.480323ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:50.238561   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:50.238570   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:50.238582   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:50.238595   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:50.242460   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:50.242477   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:50.242484   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:50.242512   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:50.242525   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:50.242533   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:50 GMT
	I1109 10:31:50.242539   29322 round_trippers.go:580]     Audit-Id: 667cbb3f-106d-4852-8c27-ba9970339ab6
	I1109 10:31:50.242545   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:50.242620   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:50.242872   29322 pod_ready.go:92] pod "kube-scheduler-multinode-102528" in "kube-system" namespace has status "Ready":"True"
	I1109 10:31:50.242882   29322 pod_ready.go:81] duration metric: took 402.707749ms waiting for pod "kube-scheduler-multinode-102528" in "kube-system" namespace to be "Ready" ...
	I1109 10:31:50.242892   29322 pod_ready.go:38] duration metric: took 39.719867134s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1109 10:31:50.242910   29322 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1109 10:31:50.250698   29322 command_runner.go:130] > -16
	I1109 10:31:50.250837   29322 ops.go:34] apiserver oom_adj: -16
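The oom_adj probe above shells out to read /proc/<pid>/oom_adj for the apiserver; -16 tells the kernel to strongly avoid OOM-killing the process. An equivalent read in Go, assuming the pid is already known (oom_adj is the legacy knob; newer kernels prefer oom_score_adj):

    import (
        "fmt"
        "os"
        "strconv"
        "strings"
    )

    // oomAdj returns the oom_adj value for pid, matching the
    // "cat /proc/$(pgrep kube-apiserver)/oom_adj" probe above.
    func oomAdj(pid int) (int, error) {
        b, err := os.ReadFile(fmt.Sprintf("/proc/%d/oom_adj", pid))
        if err != nil {
            return 0, err
        }
        return strconv.Atoi(strings.TrimSpace(string(b)))
    }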
	I1109 10:31:50.250846   29322 kubeadm.go:631] restartCluster took 56.8779672s
	I1109 10:31:50.250852   29322 kubeadm.go:398] StartCluster complete in 56.907883067s
	I1109 10:31:50.250864   29322 settings.go:142] acquiring lock: {Name:mke93232301b59b22d43a378e933baa222d3feda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 10:31:50.250958   29322 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/15331-22028/kubeconfig
	I1109 10:31:50.251326   29322 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15331-22028/kubeconfig: {Name:mk02bb1c68cad934afd737965b2dbda8f5a4ba2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 10:31:50.251925   29322 loader.go:374] Config loaded from file:  /Users/jenkins/minikube-integration/15331-22028/kubeconfig
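The settings.go and lock.go lines above update the shared kubeconfig under a file lock and then reload it via client-go's loader. A sketch of the load-modify-write cycle with clientcmd; the Server mutation shown is illustrative, not minikube's exact edit:

    import "k8s.io/client-go/tools/clientcmd"

    // updateKubeconfig loads a kubeconfig, points the named cluster at a
    // new server URL, and writes it back, as settings.go:150 does above.
    func updateKubeconfig(path, cluster, server string) error {
        cfg, err := clientcmd.LoadFromFile(path)
        if err != nil {
            return err
        }
        if c, ok := cfg.Clusters[cluster]; ok {
            c.Server = server // e.g. https://127.0.0.1:62610
        }
        return clientcmd.WriteToFile(*cfg, path)
    }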
	I1109 10:31:50.252087   29322 kapi.go:59] client config for multinode-102528: &rest.Config{Host:"https://127.0.0.1:62610", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/multinode-102528/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/multinode-102528/client.key", CAFile:"/Users/jenkins/minikube-integration/15331-22028/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x23463c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
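The kapi.go:59 dump above is the rest.Config built from the profile's client certificate, key, and CA. A sketch of constructing a clientset from that kind of config; the paths are placeholders:

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    // newClient builds a clientset from a cert-based rest.Config like
    // the one dumped above.
    func newClient(host, cert, key, ca string) (*kubernetes.Clientset, error) {
        cfg := &rest.Config{
            Host: host, // e.g. https://127.0.0.1:62610
            TLSClientConfig: rest.TLSClientConfig{
                CertFile: cert, KeyFile: key, CAFile: ca,
            },
        }
        return kubernetes.NewForConfig(cfg)
    }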
	I1109 10:31:50.252296   29322 round_trippers.go:463] GET https://127.0.0.1:62610/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I1109 10:31:50.252301   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:50.252309   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:50.252314   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:50.254498   29322 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 10:31:50.254507   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:50.254512   29322 round_trippers.go:580]     Content-Length: 292
	I1109 10:31:50.254517   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:50 GMT
	I1109 10:31:50.254523   29322 round_trippers.go:580]     Audit-Id: 4630ac4e-ce5d-49f4-8d66-e1fe6e225e49
	I1109 10:31:50.254527   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:50.254532   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:50.254537   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:50.254542   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:50.254552   29322 request.go:1154] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"2c5e384a-cc55-41eb-8931-c2c8d631848e","resourceVersion":"1077","creationTimestamp":"2022-11-09T18:25:54Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I1109 10:31:50.254628   29322 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "multinode-102528" rescaled to 1
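Rescaling coredns to one replica uses the deployment's scale subresource (the GET on .../deployments/coredns/scale above). With client-go that is the GetScale/UpdateScale pair; a minimal sketch:

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // scaleCoreDNS sets the coredns deployment's replica count via the
    // scale subresource, the operation kapi.go:244 reports above.
    func scaleCoreDNS(ctx context.Context, cs *kubernetes.Clientset, replicas int32) error {
        s, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
        if err != nil {
            return err
        }
        s.Spec.Replicas = replicas
        _, err = cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", s, metav1.UpdateOptions{})
        return err
    }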
	I1109 10:31:50.254658   29322 start.go:212] Will wait 6m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1109 10:31:50.254678   29322 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1109 10:31:50.276859   29322 out.go:177] * Verifying Kubernetes components...
	I1109 10:31:50.254697   29322 addons.go:486] enableAddons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false], additional=[]
	I1109 10:31:50.254837   29322 config.go:180] Loaded profile config "multinode-102528": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1109 10:31:50.317761   29322 addons.go:65] Setting storage-provisioner=true in profile "multinode-102528"
	I1109 10:31:50.317761   29322 addons.go:65] Setting default-storageclass=true in profile "multinode-102528"
	I1109 10:31:50.317773   29322 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 10:31:50.317788   29322 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-102528"
	I1109 10:31:50.317789   29322 addons.go:227] Setting addon storage-provisioner=true in "multinode-102528"
	W1109 10:31:50.317798   29322 addons.go:236] addon storage-provisioner should already be in state true
	I1109 10:31:50.317848   29322 host.go:66] Checking if "multinode-102528" exists ...
	I1109 10:31:50.318099   29322 cli_runner.go:164] Run: docker container inspect multinode-102528 --format={{.State.Status}}
	I1109 10:31:50.318198   29322 cli_runner.go:164] Run: docker container inspect multinode-102528 --format={{.State.Status}}
	I1109 10:31:50.339067   29322 command_runner.go:130] > apiVersion: v1
	I1109 10:31:50.339087   29322 command_runner.go:130] > data:
	I1109 10:31:50.339092   29322 command_runner.go:130] >   Corefile: |
	I1109 10:31:50.339098   29322 command_runner.go:130] >     .:53 {
	I1109 10:31:50.339103   29322 command_runner.go:130] >         errors
	I1109 10:31:50.339107   29322 command_runner.go:130] >         health {
	I1109 10:31:50.339112   29322 command_runner.go:130] >            lameduck 5s
	I1109 10:31:50.339125   29322 command_runner.go:130] >         }
	I1109 10:31:50.339132   29322 command_runner.go:130] >         ready
	I1109 10:31:50.339140   29322 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I1109 10:31:50.339144   29322 command_runner.go:130] >            pods insecure
	I1109 10:31:50.339148   29322 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I1109 10:31:50.339153   29322 command_runner.go:130] >            ttl 30
	I1109 10:31:50.339158   29322 command_runner.go:130] >         }
	I1109 10:31:50.339163   29322 command_runner.go:130] >         prometheus :9153
	I1109 10:31:50.339166   29322 command_runner.go:130] >         hosts {
	I1109 10:31:50.339171   29322 command_runner.go:130] >            192.168.65.2 host.minikube.internal
	I1109 10:31:50.339176   29322 command_runner.go:130] >            fallthrough
	I1109 10:31:50.339180   29322 command_runner.go:130] >         }
	I1109 10:31:50.339184   29322 command_runner.go:130] >         forward . /etc/resolv.conf {
	I1109 10:31:50.339190   29322 command_runner.go:130] >            max_concurrent 1000
	I1109 10:31:50.339194   29322 command_runner.go:130] >         }
	I1109 10:31:50.339198   29322 command_runner.go:130] >         cache 30
	I1109 10:31:50.339204   29322 command_runner.go:130] >         loop
	I1109 10:31:50.339212   29322 command_runner.go:130] >         reload
	I1109 10:31:50.339218   29322 command_runner.go:130] >         loadbalance
	I1109 10:31:50.339222   29322 command_runner.go:130] >     }
	I1109 10:31:50.339227   29322 command_runner.go:130] > kind: ConfigMap
	I1109 10:31:50.339233   29322 command_runner.go:130] > metadata:
	I1109 10:31:50.339239   29322 command_runner.go:130] >   creationTimestamp: "2022-11-09T18:25:54Z"
	I1109 10:31:50.339250   29322 command_runner.go:130] >   name: coredns
	I1109 10:31:50.339255   29322 command_runner.go:130] >   namespace: kube-system
	I1109 10:31:50.339258   29322 command_runner.go:130] >   resourceVersion: "359"
	I1109 10:31:50.339269   29322 command_runner.go:130] >   uid: a7e5939f-cbe7-4fa9-af8b-b4745b0c1a3a
	I1109 10:31:50.343797   29322 start.go:806] CoreDNS already contains "host.minikube.internal" host record, skipping...
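The Corefile dump above is fetched with the cluster's kubectl and checked for an existing host.minikube.internal entry in the hosts block before any edit is attempted. A sketch of that containment check against the coredns ConfigMap; the helper name is illustrative:

    import (
        "context"
        "strings"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // hasHostRecord reports whether the coredns Corefile already carries
    // the host.minikube.internal entry, the condition start.go:806 logs above.
    func hasHostRecord(ctx context.Context, cs *kubernetes.Clientset) (bool, error) {
        cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        return strings.Contains(cm.Data["Corefile"], "host.minikube.internal"), nil
    }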
	I1109 10:31:50.344849   29322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-102528
	I1109 10:31:50.382978   29322 loader.go:374] Config loaded from file:  /Users/jenkins/minikube-integration/15331-22028/kubeconfig
	I1109 10:31:50.404388   29322 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1109 10:31:50.404666   29322 kapi.go:59] client config for multinode-102528: &rest.Config{Host:"https://127.0.0.1:62610", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/multinode-102528/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/multinode-102528/client.key", CAFile:"/Users/jenkins/minikube-integration/15331-22028/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x23463c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1109 10:31:50.425511   29322 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 10:31:50.425532   29322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1109 10:31:50.425684   29322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102528
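Both cli_runner invocations above use docker container inspect with a Go template to pull the host port mapped to a container port (8443/tcp for the API server, 22/tcp for SSH). Run from Go, that looks like the following sketch, an exec wrapper rather than minikube's own cli_runner:

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // hostPort returns the host port docker mapped to containerPort,
    // using the same template query the cli_runner lines above execute.
    func hostPort(container, containerPort string) (string, error) {
        tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports "%s/tcp") 0).HostPort}}`, containerPort)
        out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
        return strings.TrimSpace(string(out)), err
    }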
	I1109 10:31:50.426357   29322 round_trippers.go:463] GET https://127.0.0.1:62610/apis/storage.k8s.io/v1/storageclasses
	I1109 10:31:50.426731   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:50.426777   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:50.426791   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:50.430902   29322 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1109 10:31:50.430924   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:50.430938   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:50.430967   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:50.430977   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:50.430981   29322 round_trippers.go:580]     Content-Length: 1274
	I1109 10:31:50.430986   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:50 GMT
	I1109 10:31:50.430991   29322 round_trippers.go:580]     Audit-Id: 23bd0941-1af6-4d6e-b506-0f4281edc2cf
	I1109 10:31:50.430995   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:50.431040   29322 request.go:1154] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"1079"},"items":[{"metadata":{"name":"standard","uid":"f1f19485-8735-41b4-b256-141da52da440","resourceVersion":"373","creationTimestamp":"2022-11-09T18:26:08Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2022-11-09T18:26:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is [truncated 250 chars]
	I1109 10:31:50.431494   29322 request.go:1154] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"f1f19485-8735-41b4-b256-141da52da440","resourceVersion":"373","creationTimestamp":"2022-11-09T18:26:08Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2022-11-09T18:26:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1109 10:31:50.431547   29322 round_trippers.go:463] PUT https://127.0.0.1:62610/apis/storage.k8s.io/v1/storageclasses/standard
	I1109 10:31:50.431553   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:50.431559   29322 round_trippers.go:473]     Content-Type: application/json
	I1109 10:31:50.431565   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:50.431570   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:50.434768   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:50.434781   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:50.434786   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:50 GMT
	I1109 10:31:50.434791   29322 round_trippers.go:580]     Audit-Id: 2db95456-0cb5-48d2-bea9-3a04a1a2756f
	I1109 10:31:50.434795   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:50.434800   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:50.434805   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:50.434814   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:50.434827   29322 round_trippers.go:580]     Content-Length: 1220
	I1109 10:31:50.434950   29322 request.go:1154] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"f1f19485-8735-41b4-b256-141da52da440","resourceVersion":"373","creationTimestamp":"2022-11-09T18:26:08Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2022-11-09T18:26:08Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I1109 10:31:50.435030   29322 addons.go:227] Setting addon default-storageclass=true in "multinode-102528"
	W1109 10:31:50.435038   29322 addons.go:236] addon default-storageclass should already be in state true
	I1109 10:31:50.435061   29322 host.go:66] Checking if "multinode-102528" exists ...
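The GET/PUT pair on /apis/storage.k8s.io/v1/storageclasses above re-asserts that the "standard" class keeps its storageclass.kubernetes.io/is-default-class annotation. Finding the current default class with client-go is a matter of reading that annotation; a sketch:

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // defaultStorageClass returns the name of the class annotated as the
    // cluster default, the property the PUT above preserves for "standard".
    func defaultStorageClass(ctx context.Context, cs *kubernetes.Clientset) (string, error) {
        list, err := cs.StorageV1().StorageClasses().List(ctx, metav1.ListOptions{})
        if err != nil {
            return "", err
        }
        for _, sc := range list.Items {
            if sc.Annotations["storageclass.kubernetes.io/is-default-class"] == "true" {
                return sc.Name, nil
            }
        }
        return "", nil
    }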
	I1109 10:31:50.435438   29322 cli_runner.go:164] Run: docker container inspect multinode-102528 --format={{.State.Status}}
	I1109 10:31:50.436278   29322 node_ready.go:35] waiting up to 6m0s for node "multinode-102528" to be "Ready" ...
	I1109 10:31:50.436401   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:50.436411   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:50.436418   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:50.436423   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:50.439461   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:50.439476   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:50.439483   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:50.439487   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:50.439492   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:50 GMT
	I1109 10:31:50.439496   29322 round_trippers.go:580]     Audit-Id: 6112e173-5909-4e2e-8dcb-b178751a3503
	I1109 10:31:50.439500   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:50.439504   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:50.439578   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:50.439839   29322 node_ready.go:49] node "multinode-102528" has status "Ready":"True"
	I1109 10:31:50.439846   29322 node_ready.go:38] duration metric: took 3.536044ms waiting for node "multinode-102528" to be "Ready" ...
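node_ready above applies the same condition pattern to the Node object, returning once NodeReady reports True. A sketch mirroring the pod check earlier in this log; the helper name is illustrative:

    import corev1 "k8s.io/api/core/v1"

    // isNodeReady reports whether a node's NodeReady condition is True,
    // the check behind node_ready.go:49 above.
    func isNodeReady(node *corev1.Node) bool {
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }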
	I1109 10:31:50.439858   29322 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1109 10:31:50.486912   29322 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62611 SSHKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/multinode-102528/id_rsa Username:docker}
	I1109 10:31:50.493359   29322 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I1109 10:31:50.493371   29322 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1109 10:31:50.493458   29322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102528
	I1109 10:31:50.551229   29322 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62611 SSHKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/multinode-102528/id_rsa Username:docker}
	I1109 10:31:50.577025   29322 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 10:31:50.636473   29322 request.go:614] Waited for 196.560259ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods
	I1109 10:31:50.636515   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods
	I1109 10:31:50.636521   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:50.636530   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:50.636537   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:50.640815   29322 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1109 10:31:50.640845   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:50.640859   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:50 GMT
	I1109 10:31:50.640876   29322 round_trippers.go:580]     Audit-Id: 64877145-f638-4637-8d9a-e8d0b998b412
	I1109 10:31:50.640892   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:50.640897   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:50.640915   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:50.640921   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:50.641915   29322 request.go:1154] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1079"},"items":[{"metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1073","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 85374 chars]
	I1109 10:31:50.643894   29322 pod_ready.go:78] waiting up to 6m0s for pod "coredns-565d847f94-fx6lt" in "kube-system" namespace to be "Ready" ...
	I1109 10:31:50.645672   29322 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1109 10:31:50.737781   29322 command_runner.go:130] > serviceaccount/storage-provisioner unchanged
	I1109 10:31:50.739140   29322 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner unchanged
	I1109 10:31:50.740667   29322 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I1109 10:31:50.742361   29322 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I1109 10:31:50.743932   29322 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath unchanged
	I1109 10:31:50.750008   29322 command_runner.go:130] > pod/storage-provisioner configured
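Addon manifests are copied into /etc/kubernetes/addons and applied with the cluster's own kubectl binary over SSH; the "unchanged" lines above are kubectl's idempotent apply output. A sketch of issuing the same command from Go, here via `minikube ssh` for simplicity rather than the SSH tunnel minikube itself opens:

    import (
        "fmt"
        "os/exec"
    )

    // applyAddon re-runs the idempotent apply the ssh_runner issues
    // above for a manifest already staged on the node.
    func applyAddon(profile, manifest string) ([]byte, error) {
        cmd := fmt.Sprintf(
            "sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f %s",
            manifest)
        return exec.Command("minikube", "-p", profile, "ssh", "--", cmd).CombinedOutput()
    }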
	I1109 10:31:50.836743   29322 request.go:614] Waited for 192.804333ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:50.836804   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/coredns-565d847f94-fx6lt
	I1109 10:31:50.836810   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:50.836816   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:50.836823   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:50.839478   29322 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1109 10:31:50.839493   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:50.839499   29322 round_trippers.go:580]     Audit-Id: 8e3b6e75-41b6-41d2-96e2-af040440a726
	I1109 10:31:50.839507   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:50.839513   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:50.839518   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:50.839523   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:50.839528   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:50 GMT
	I1109 10:31:50.839608   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1073","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6553 chars]
	I1109 10:31:50.845820   29322 command_runner.go:130] > storageclass.storage.k8s.io/standard unchanged
	I1109 10:31:50.875244   29322 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1109 10:31:50.897126   29322 addons.go:488] enableAddons completed in 642.446218ms
	I1109 10:31:51.036513   29322 request.go:614] Waited for 196.563664ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:51.036576   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:51.036593   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:51.036640   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:51.036653   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:51.040446   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:51.040462   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:51.040474   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:51 GMT
	I1109 10:31:51.040481   29322 round_trippers.go:580]     Audit-Id: 79a8b74f-f251-41b9-a718-a682821837ad
	I1109 10:31:51.040489   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:51.040503   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:51.040512   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:51.040521   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:51.040600   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:51.040878   29322 pod_ready.go:92] pod "coredns-565d847f94-fx6lt" in "kube-system" namespace has status "Ready":"True"
	I1109 10:31:51.040886   29322 pod_ready.go:81] duration metric: took 396.991895ms waiting for pod "coredns-565d847f94-fx6lt" in "kube-system" namespace to be "Ready" ...
	I1109 10:31:51.040894   29322 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-102528" in "kube-system" namespace to be "Ready" ...
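
Each GET / Request Headers / Response Status / Response Headers trace above (and throughout this log) is emitted by client-go's debugging round tripper at high verbosity. A hedged sketch of the same idea using nothing but net/http: wrap the transport and print each exchange. The URL is a placeholder, not this test's apiserver endpoint:

    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    // loggingTransport wraps an http.RoundTripper and prints the request
    // method/URL, headers, and response status, mimicking round_trippers.go.
    type loggingTransport struct{ next http.RoundTripper }

    func (t loggingTransport) RoundTrip(req *http.Request) (*http.Response, error) {
    	fmt.Printf("%s %s\n", req.Method, req.URL)
    	for k, v := range req.Header {
    		fmt.Printf("    %s: %v\n", k, v)
    	}
    	start := time.Now()
    	resp, err := t.next.RoundTrip(req)
    	if err != nil {
    		return nil, err
    	}
    	fmt.Printf("Response Status: %s in %d milliseconds\n",
    		resp.Status, time.Since(start).Milliseconds())
    	for k, v := range resp.Header {
    		fmt.Printf("    %s: %v\n", k, v)
    	}
    	return resp, nil
    }

    func main() {
    	client := &http.Client{Transport: loggingTransport{next: http.DefaultTransport}}
    	// Placeholder URL; the test hits https://127.0.0.1:62610 with client certificates.
    	resp, err := client.Get("https://example.com/")
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	resp.Body.Close()
    }
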
	I1109 10:31:51.238482   29322 request.go:614] Waited for 197.459325ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/etcd-multinode-102528
	I1109 10:31:51.238538   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/etcd-multinode-102528
	I1109 10:31:51.238548   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:51.238562   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:51.238572   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:51.242284   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:51.242300   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:51.242308   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:51 GMT
	I1109 10:31:51.242321   29322 round_trippers.go:580]     Audit-Id: 89518439-dcc9-46f6-b0dc-3c61e0d79185
	I1109 10:31:51.242328   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:51.242338   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:51.242344   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:51.242354   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:51.242575   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-102528","namespace":"kube-system","uid":"5dde8340-2916-4da6-91aa-ea6dfe24a5ad","resourceVersion":"1041","creationTimestamp":"2022-11-09T18:25:54Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.58.2:2379","kubernetes.io/config.hash":"58165e0d3ee72e9b0f054fadec557161","kubernetes.io/config.mirror":"58165e0d3ee72e9b0f054fadec557161","kubernetes.io/config.seen":"2022-11-09T18:25:54.343403314Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6046 chars]
	I1109 10:31:51.438416   29322 request.go:614] Waited for 195.470808ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:51.438533   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:51.438544   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:51.438556   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:51.438567   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:51.442319   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:51.442336   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:51.442344   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:51.442350   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:51.442356   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:51.442364   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:51.442370   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:51 GMT
	I1109 10:31:51.442377   29322 round_trippers.go:580]     Audit-Id: 7f83dae5-5ef3-43ed-ad86-30770d972412
	I1109 10:31:51.442476   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:51.442751   29322 pod_ready.go:92] pod "etcd-multinode-102528" in "kube-system" namespace has status "Ready":"True"
	I1109 10:31:51.442769   29322 pod_ready.go:81] duration metric: took 401.864919ms waiting for pod "etcd-multinode-102528" in "kube-system" namespace to be "Ready" ...
	I1109 10:31:51.442798   29322 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-102528" in "kube-system" namespace to be "Ready" ...
	I1109 10:31:51.636989   29322 request.go:614] Waited for 194.151085ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-102528
	I1109 10:31:51.637089   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-102528
	I1109 10:31:51.637100   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:51.637119   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:51.637134   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:51.641170   29322 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1109 10:31:51.641185   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:51.641193   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:51 GMT
	I1109 10:31:51.641199   29322 round_trippers.go:580]     Audit-Id: 22fde405-018a-4404-bf12-233659e0904c
	I1109 10:31:51.641206   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:51.641213   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:51.641219   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:51.641225   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:51.641306   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-102528","namespace":"kube-system","uid":"f48fa313-e8ec-42bc-87bc-7daede794fe2","resourceVersion":"1029","creationTimestamp":"2022-11-09T18:25:54Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.58.2:8443","kubernetes.io/config.hash":"b110864cd1ed66678c31ad09d14c41ec","kubernetes.io/config.mirror":"b110864cd1ed66678c31ad09d14c41ec","kubernetes.io/config.seen":"2022-11-09T18:25:54.343403906Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 8428 chars]
	I1109 10:31:51.838408   29322 request.go:614] Waited for 196.706833ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:51.838485   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:51.838498   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:51.838511   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:51.838524   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:51.842240   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:51.842256   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:51.842273   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:51.842282   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:51.842293   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:51.842302   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:51 GMT
	I1109 10:31:51.842310   29322 round_trippers.go:580]     Audit-Id: 6e4fa976-5104-4a97-921b-d75a8bede7fb
	I1109 10:31:51.842317   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:51.842575   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:51.842853   29322 pod_ready.go:92] pod "kube-apiserver-multinode-102528" in "kube-system" namespace has status "Ready":"True"
	I1109 10:31:51.842859   29322 pod_ready.go:81] duration metric: took 400.063646ms waiting for pod "kube-apiserver-multinode-102528" in "kube-system" namespace to be "Ready" ...
	I1109 10:31:51.842867   29322 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-102528" in "kube-system" namespace to be "Ready" ...
	I1109 10:31:52.036495   29322 request.go:614] Waited for 193.584888ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-102528
	I1109 10:31:52.036557   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-102528
	I1109 10:31:52.036601   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:52.036620   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:52.036633   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:52.039756   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:52.039772   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:52.039783   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:52.039794   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:52 GMT
	I1109 10:31:52.039801   29322 round_trippers.go:580]     Audit-Id: 1e8740bb-d312-4092-b67f-48cb6c686c8d
	I1109 10:31:52.039842   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:52.039850   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:52.039861   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:52.040044   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-102528","namespace":"kube-system","uid":"3dd056ba-22b5-4b0c-aa7e-9e00d215df9a","resourceVersion":"1035","creationTimestamp":"2022-11-09T18:25:52Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ec9e561364ffe02db1e38ab82ddc699b","kubernetes.io/config.mirror":"ec9e561364ffe02db1e38ab82ddc699b","kubernetes.io/config.seen":"2022-11-09T18:25:43.900701692Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 8005 chars]
	I1109 10:31:52.236382   29322 request.go:614] Waited for 195.974896ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:52.236433   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:52.236451   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:52.236503   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:52.236516   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:52.240104   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:52.240122   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:52.240131   29322 round_trippers.go:580]     Audit-Id: 67b77a0a-a747-4e73-9dd6-0ad109cdc4f6
	I1109 10:31:52.240138   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:52.240145   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:52.240151   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:52.240158   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:52.240165   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:52 GMT
	I1109 10:31:52.240259   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:52.240526   29322 pod_ready.go:92] pod "kube-controller-manager-multinode-102528" in "kube-system" namespace has status "Ready":"True"
	I1109 10:31:52.240534   29322 pod_ready.go:81] duration metric: took 397.6731ms waiting for pod "kube-controller-manager-multinode-102528" in "kube-system" namespace to be "Ready" ...
	I1109 10:31:52.240543   29322 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9wsxp" in "kube-system" namespace to be "Ready" ...
	I1109 10:31:52.438109   29322 request.go:614] Waited for 197.482373ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/kube-proxy-9wsxp
	I1109 10:31:52.438175   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/kube-proxy-9wsxp
	I1109 10:31:52.438185   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:52.438201   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:52.438211   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:52.441973   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:52.441985   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:52.441992   29322 round_trippers.go:580]     Audit-Id: ff85bade-92d9-4986-b3b2-3b21f5d86198
	I1109 10:31:52.442020   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:52.442028   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:52.442033   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:52.442038   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:52.442042   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:52 GMT
	I1109 10:31:52.442096   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-9wsxp","generateName":"kube-proxy-","namespace":"kube-system","uid":"03c6822b-9fef-4fa3-82a3-bb5082cf31b3","resourceVersion":"1023","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"controller-revision-hash":"b9c5d5dc4","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"bf8e9b6c-a049-46db-b636-548666fd5424","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf8e9b6c-a049-46db-b636-548666fd5424\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5736 chars]
	I1109 10:31:52.637720   29322 request.go:614] Waited for 195.332813ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:52.637787   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:52.637803   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:52.637818   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:52.637829   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:52.641380   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:52.641395   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:52.641403   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:52.641410   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:52.641416   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:52.641431   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:52.641440   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:52 GMT
	I1109 10:31:52.641446   29322 round_trippers.go:580]     Audit-Id: ee6ef8c5-ad9a-47cf-aebb-fa9813d6e71d
	I1109 10:31:52.641686   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:52.641999   29322 pod_ready.go:92] pod "kube-proxy-9wsxp" in "kube-system" namespace has status "Ready":"True"
	I1109 10:31:52.642009   29322 pod_ready.go:81] duration metric: took 401.471929ms waiting for pod "kube-proxy-9wsxp" in "kube-system" namespace to be "Ready" ...
	I1109 10:31:52.642021   29322 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-c4lh6" in "kube-system" namespace to be "Ready" ...
	I1109 10:31:52.837373   29322 request.go:614] Waited for 195.279986ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/kube-proxy-c4lh6
	I1109 10:31:52.837462   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/kube-proxy-c4lh6
	I1109 10:31:52.837474   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:52.837488   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:52.837499   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:52.841070   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:52.841085   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:52.841092   29322 round_trippers.go:580]     Audit-Id: f57c670c-d744-434a-903b-c333dc5033cf
	I1109 10:31:52.841099   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:52.841105   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:52.841111   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:52.841119   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:52.841125   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:52 GMT
	I1109 10:31:52.841683   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-c4lh6","generateName":"kube-proxy-","namespace":"kube-system","uid":"e9055586-6022-464a-acdd-6fce3c87392b","resourceVersion":"845","creationTimestamp":"2022-11-09T18:26:28Z","labels":{"controller-revision-hash":"b9c5d5dc4","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"bf8e9b6c-a049-46db-b636-548666fd5424","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:28Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf8e9b6c-a049-46db-b636-548666fd5424\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5741 chars]
	I1109 10:31:53.036399   29322 request.go:614] Waited for 194.425284ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:62610/api/v1/nodes/multinode-102528-m02
	I1109 10:31:53.036461   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528-m02
	I1109 10:31:53.036560   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:53.036577   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:53.036599   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:53.039879   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:53.039896   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:53.039902   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:53 GMT
	I1109 10:31:53.039907   29322 round_trippers.go:580]     Audit-Id: 75eb063e-866b-4289-927e-2805251d8167
	I1109 10:31:53.039912   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:53.039920   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:53.039925   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:53.039930   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:53.039986   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528-m02","uid":"e1542fe1-dc88-406c-b080-a5120e5abea2","resourceVersion":"857","creationTimestamp":"2022-11-09T18:29:33Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:29:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:29:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.ku
bernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f [truncated 4536 chars]
	I1109 10:31:53.040183   29322 pod_ready.go:92] pod "kube-proxy-c4lh6" in "kube-system" namespace has status "Ready":"True"
	I1109 10:31:53.040189   29322 pod_ready.go:81] duration metric: took 398.156397ms waiting for pod "kube-proxy-c4lh6" in "kube-system" namespace to be "Ready" ...
	I1109 10:31:53.040196   29322 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kh6r6" in "kube-system" namespace to be "Ready" ...
	I1109 10:31:53.236349   29322 request.go:614] Waited for 196.117365ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/kube-proxy-kh6r6
	I1109 10:31:53.236453   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/kube-proxy-kh6r6
	I1109 10:31:53.236465   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:53.236477   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:53.236487   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:53.240993   29322 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1109 10:31:53.241006   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:53.241012   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:53.241017   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:53.241022   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:53.241027   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:53 GMT
	I1109 10:31:53.241032   29322 round_trippers.go:580]     Audit-Id: c451d501-c710-4a1c-82a5-e751549aa3c4
	I1109 10:31:53.241037   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:53.241109   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-kh6r6","generateName":"kube-proxy-","namespace":"kube-system","uid":"de2bad4b-35b4-4537-a6a3-7acd77c63e69","resourceVersion":"925","creationTimestamp":"2022-11-09T18:27:09Z","labels":{"controller-revision-hash":"b9c5d5dc4","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"bf8e9b6c-a049-46db-b636-548666fd5424","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:27:09Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bf8e9b6c-a049-46db-b636-548666fd5424\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:re
quiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{ [truncated 5743 chars]
	I1109 10:31:53.436516   29322 request.go:614] Waited for 195.139966ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:62610/api/v1/nodes/multinode-102528-m03
	I1109 10:31:53.436623   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528-m03
	I1109 10:31:53.436634   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:53.436646   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:53.436659   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:53.440442   29322 round_trippers.go:574] Response Status: 404 Not Found in 3 milliseconds
	I1109 10:31:53.440460   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:53.440468   29322 round_trippers.go:580]     Content-Length: 210
	I1109 10:31:53.440475   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:53 GMT
	I1109 10:31:53.440481   29322 round_trippers.go:580]     Audit-Id: 13e74584-6407-493a-b6d5-7dbb73a4224a
	I1109 10:31:53.440487   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:53.440493   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:53.440500   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:53.440506   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:53.440523   29322 request.go:1154] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"multinode-102528-m03\" not found","reason":"NotFound","details":{"name":"multinode-102528-m03","kind":"nodes"},"code":404}
	I1109 10:31:53.440589   29322 pod_ready.go:97] node "multinode-102528-m03" hosting pod "kube-proxy-kh6r6" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-102528-m03": nodes "multinode-102528-m03" not found
	I1109 10:31:53.440598   29322 pod_ready.go:81] duration metric: took 400.408173ms waiting for pod "kube-proxy-kh6r6" in "kube-system" namespace to be "Ready" ...
	E1109 10:31:53.440606   29322 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-102528-m03" hosting pod "kube-proxy-kh6r6" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-102528-m03": nodes "multinode-102528-m03" not found
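
The node backing kube-proxy-kh6r6 (multinode-102528-m03) has already been deleted, so the apiserver answers the node GET with a v1 Status object (code 404, reason NotFound), and the readiness wait marks the pod as skippable instead of failing the test. A small sketch of recognizing that shape from the raw JSON; the struct is a hand-rolled subset of metav1.Status, used purely for illustration:

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // status mirrors just the fields of a Kubernetes metav1.Status
    // that the log's 404 response carries.
    type status struct {
    	Kind    string `json:"kind"`
    	Status  string `json:"status"`
    	Message string `json:"message"`
    	Reason  string `json:"reason"`
    	Code    int    `json:"code"`
    }

    func main() {
    	// Body from the log's 404 response, minus the details block.
    	body := []byte(`{"kind":"Status","apiVersion":"v1","status":"Failure",
    	  "message":"nodes \"multinode-102528-m03\" not found",
    	  "reason":"NotFound","code":404}`)

    	var s status
    	if err := json.Unmarshal(body, &s); err != nil {
    		fmt.Println("decode:", err)
    		return
    	}
    	if s.Kind == "Status" && s.Reason == "NotFound" {
    		// pod_ready treats this as "node gone, skip the pod" rather than an error.
    		fmt.Println("skipping:", s.Message)
    	}
    }
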
	I1109 10:31:53.440612   29322 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-102528" in "kube-system" namespace to be "Ready" ...
	I1109 10:31:53.638340   29322 request.go:614] Waited for 197.67651ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-102528
	I1109 10:31:53.638432   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-102528
	I1109 10:31:53.638466   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:53.638480   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:53.638494   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:53.643176   29322 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1109 10:31:53.643190   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:53.643196   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:53.643201   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:53 GMT
	I1109 10:31:53.643206   29322 round_trippers.go:580]     Audit-Id: e269443f-0ae7-44bf-9779-f4b86773c058
	I1109 10:31:53.643211   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:53.643215   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:53.643220   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:53.643270   29322 request.go:1154] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-102528","namespace":"kube-system","uid":"26dff845-4103-4884-86e3-42c37dc577c0","resourceVersion":"1014","creationTimestamp":"2022-11-09T18:25:54Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"9865f6bce1997a307196ce89b4764fd5","kubernetes.io/config.mirror":"9865f6bce1997a307196ce89b4764fd5","kubernetes.io/config.seen":"2022-11-09T18:25:54.343402489Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:25:54Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 4887 chars]
	I1109 10:31:53.838352   29322 request.go:614] Waited for 194.794574ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:53.838482   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes/multinode-102528
	I1109 10:31:53.838493   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:53.838504   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:53.838515   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:53.842400   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:53.842416   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:53.842423   29322 round_trippers.go:580]     Audit-Id: 4a52291a-71e6-45b4-b24d-e8723157a7af
	I1109 10:31:53.842430   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:53.842436   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:53.842441   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:53.842447   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:53.842453   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:53 GMT
	I1109 10:31:53.842518   29322 request.go:1154] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2022-11-09T18:25:51Z","fieldsType":"FieldsV1","fi [truncated 5321 chars]
	I1109 10:31:53.842785   29322 pod_ready.go:92] pod "kube-scheduler-multinode-102528" in "kube-system" namespace has status "Ready":"True"
	I1109 10:31:53.842793   29322 pod_ready.go:81] duration metric: took 402.18488ms waiting for pod "kube-scheduler-multinode-102528" in "kube-system" namespace to be "Ready" ...
	I1109 10:31:53.842802   29322 pod_ready.go:38] duration metric: took 3.403025986s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
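
Every pod_ready cycle above boils down to the same test: fetch the pod, scan status.conditions for the entry with type "Ready", and compare its status to "True". A minimal decoder for that check, with hand-rolled structs standing in for the real corev1.Pod types:

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    type podCondition struct {
    	Type   string `json:"type"`
    	Status string `json:"status"`
    }

    type pod struct {
    	Metadata struct {
    		Name string `json:"name"`
    	} `json:"metadata"`
    	Status struct {
    		Conditions []podCondition `json:"conditions"`
    	} `json:"status"`
    }

    // isReady reports whether the pod's Ready condition is "True",
    // the same test applied above to each control-plane pod.
    func isReady(p pod) bool {
    	for _, c := range p.Status.Conditions {
    		if c.Type == "Ready" {
    			return c.Status == "True"
    		}
    	}
    	return false
    }

    func main() {
    	// Abbreviated stand-in for the GET .../pods/etcd-multinode-102528 body.
    	body := []byte(`{"metadata":{"name":"etcd-multinode-102528"},
    	  "status":{"conditions":[{"type":"Ready","status":"True"}]}}`)
    	var p pod
    	if err := json.Unmarshal(body, &p); err != nil {
    		fmt.Println("decode:", err)
    		return
    	}
    	fmt.Printf("pod %q ready: %v\n", p.Metadata.Name, isReady(p))
    }
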
	I1109 10:31:53.842821   29322 api_server.go:51] waiting for apiserver process to appear ...
	I1109 10:31:53.842902   29322 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 10:31:53.852004   29322 command_runner.go:130] > 1777
	I1109 10:31:53.852713   29322 api_server.go:71] duration metric: took 3.598134656s to wait for apiserver process to appear ...
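
The process check above is a single `sudo pgrep -xnf kube-apiserver.*minikube.*` run over SSH until it prints a PID (here, 1777). A sketch of the same poll with os/exec, run locally rather than through minikube's ssh_runner; the 30-second deadline is an assumption for the example:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    func main() {
    	deadline := time.Now().Add(30 * time.Second)
    	for time.Now().Before(deadline) {
    		// -x: exact match, -n: newest process, -f: match the full command line.
    		out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
    		if err == nil {
    			fmt.Println("apiserver pid:", strings.TrimSpace(string(out)))
    			return
    		}
    		time.Sleep(500 * time.Millisecond) // pgrep exits non-zero until a match appears
    	}
    	fmt.Println("timed out waiting for apiserver process")
    }
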
	I1109 10:31:53.852722   29322 api_server.go:87] waiting for apiserver healthz status ...
	I1109 10:31:53.852730   29322 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:62610/healthz ...
	I1109 10:31:53.857837   29322 api_server.go:278] https://127.0.0.1:62610/healthz returned 200:
	ok
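
The healthz probe is a plain HTTPS GET that expects a 200 status and the literal body "ok". A minimal version with net/http; InsecureSkipVerify stands in for the client-certificate TLS config minikube actually builds, and the URL is the forwarded apiserver port from this run:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// Stand-in for minikube's real client-certificate TLS config.
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := client.Get("https://127.0.0.1:62610/healthz")
    	if err != nil {
    		fmt.Println("healthz:", err)
    		return
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	// The log records exactly this pair: "returned 200" and the body "ok".
    	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
    }
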
	I1109 10:31:53.857869   29322 round_trippers.go:463] GET https://127.0.0.1:62610/version
	I1109 10:31:53.857874   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:53.857881   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:53.857887   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:53.858850   29322 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1109 10:31:53.858859   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:53.858865   29322 round_trippers.go:580]     Content-Length: 263
	I1109 10:31:53.858870   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:53 GMT
	I1109 10:31:53.858875   29322 round_trippers.go:580]     Audit-Id: 9195da95-e731-482a-bd29-3de3e97404a6
	I1109 10:31:53.858880   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:53.858885   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:53.858890   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:53.858895   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:53.858904   29322 request.go:1154] Response Body: {
	  "major": "1",
	  "minor": "25",
	  "gitVersion": "v1.25.3",
	  "gitCommit": "434bfd82814af038ad94d62ebe59b133fcb50506",
	  "gitTreeState": "clean",
	  "buildDate": "2022-10-12T10:49:09Z",
	  "goVersion": "go1.19.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I1109 10:31:53.858925   29322 api_server.go:140] control plane version: v1.25.3
	I1109 10:31:53.858931   29322 api_server.go:130] duration metric: took 6.204895ms to wait for apiserver health ...
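
The /version body above is the apiserver's version.Info object; minikube reads gitVersion out of it to report the control-plane version. Decoding it takes nothing more than a matching struct, sketched here with only the fields shown in the log:

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // versionInfo mirrors a subset of the /version response fields in the log.
    type versionInfo struct {
    	Major      string `json:"major"`
    	Minor      string `json:"minor"`
    	GitVersion string `json:"gitVersion"`
    	GoVersion  string `json:"goVersion"`
    	Platform   string `json:"platform"`
    }

    func main() {
    	body := []byte(`{"major":"1","minor":"25","gitVersion":"v1.25.3",
    	  "goVersion":"go1.19.2","platform":"linux/amd64"}`)
    	var v versionInfo
    	if err := json.Unmarshal(body, &v); err != nil {
    		fmt.Println("decode:", err)
    		return
    	}
    	// Matches the log line "control plane version: v1.25.3".
    	fmt.Println("control plane version:", v.GitVersion)
    }
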
	I1109 10:31:53.858935   29322 system_pods.go:43] waiting for kube-system pods to appear ...
	I1109 10:31:54.036735   29322 request.go:614] Waited for 177.758991ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods
	I1109 10:31:54.036842   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods
	I1109 10:31:54.036854   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:54.036866   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:54.036876   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:54.043136   29322 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1109 10:31:54.043149   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:54.043155   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:54.043159   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:54.043164   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:54 GMT
	I1109 10:31:54.043168   29322 round_trippers.go:580]     Audit-Id: ca34093d-d9e3-43c3-bd98-3a95ddf67286
	I1109 10:31:54.043176   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:54.043185   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:54.044715   29322 request.go:1154] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1079"},"items":[{"metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1073","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 85374 chars]
	I1109 10:31:54.046658   29322 system_pods.go:59] 12 kube-system pods found
	I1109 10:31:54.046668   29322 system_pods.go:61] "coredns-565d847f94-fx6lt" [680c8c15-39e0-4143-8dfd-30727e628800] Running
	I1109 10:31:54.046672   29322 system_pods.go:61] "etcd-multinode-102528" [5dde8340-2916-4da6-91aa-ea6dfe24a5ad] Running
	I1109 10:31:54.046677   29322 system_pods.go:61] "kindnet-6kjz8" [b34e8f27-542c-40de-80a7-cf1226429128] Running
	I1109 10:31:54.046680   29322 system_pods.go:61] "kindnet-9td8m" [bb563027-b991-4b95-921a-ee4687934118] Running
	I1109 10:31:54.046684   29322 system_pods.go:61] "kindnet-z66sn" [03cc3962-c1e0-444a-8743-743e707bf96d] Running
	I1109 10:31:54.046687   29322 system_pods.go:61] "kube-apiserver-multinode-102528" [f48fa313-e8ec-42bc-87bc-7daede794fe2] Running
	I1109 10:31:54.046692   29322 system_pods.go:61] "kube-controller-manager-multinode-102528" [3dd056ba-22b5-4b0c-aa7e-9e00d215df9a] Running
	I1109 10:31:54.046697   29322 system_pods.go:61] "kube-proxy-9wsxp" [03c6822b-9fef-4fa3-82a3-bb5082cf31b3] Running
	I1109 10:31:54.046701   29322 system_pods.go:61] "kube-proxy-c4lh6" [e9055586-6022-464a-acdd-6fce3c87392b] Running
	I1109 10:31:54.046705   29322 system_pods.go:61] "kube-proxy-kh6r6" [de2bad4b-35b4-4537-a6a3-7acd77c63e69] Running
	I1109 10:31:54.046709   29322 system_pods.go:61] "kube-scheduler-multinode-102528" [26dff845-4103-4884-86e3-42c37dc577c0] Running
	I1109 10:31:54.046727   29322 system_pods.go:61] "storage-provisioner" [5c5e247e-06db-434c-af4a-91a2c2a08779] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 10:31:54.046734   29322 system_pods.go:74] duration metric: took 187.799545ms to wait for pod list to return data ...
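
The 12-line inventory above comes from one GET of the kube-system PodList; each entry prints the pod name, UID, and its state, which is why storage-provisioner carries the extra ContainersNotReady caveat. A compact sketch of producing such a summary from the list; the log prints the condition reason (ContainersNotReady) while this sketch prints only the condition status, which is enough to flag the unready pod:

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    type podList struct {
    	Items []struct {
    		Metadata struct {
    			Name string `json:"name"`
    			UID  string `json:"uid"`
    		} `json:"metadata"`
    		Status struct {
    			Phase      string `json:"phase"`
    			Conditions []struct {
    				Type   string `json:"type"`
    				Status string `json:"status"`
    			} `json:"conditions"`
    		} `json:"status"`
    	} `json:"items"`
    }

    func main() {
    	// Two-item stand-in for the real kube-system PodList in the log.
    	body := []byte(`{"items":[
    	  {"metadata":{"name":"coredns-565d847f94-fx6lt","uid":"680c8c15"},
    	   "status":{"phase":"Running","conditions":[{"type":"Ready","status":"True"}]}},
    	  {"metadata":{"name":"storage-provisioner","uid":"5c5e247e"},
    	   "status":{"phase":"Running","conditions":[{"type":"Ready","status":"False"}]}}]}`)
    	var list podList
    	if err := json.Unmarshal(body, &list); err != nil {
    		fmt.Println("decode:", err)
    		return
    	}
    	fmt.Printf("%d kube-system pods found\n", len(list.Items))
    	for _, p := range list.Items {
    		note := ""
    		for _, c := range p.Status.Conditions {
    			if c.Type == "Ready" && c.Status != "True" {
    				note = " / Ready:" + c.Status // unready despite phase Running
    			}
    		}
    		fmt.Printf("%q [%s] %s%s\n", p.Metadata.Name, p.Metadata.UID, p.Status.Phase, note)
    	}
    }
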
	I1109 10:31:54.046740   29322 default_sa.go:34] waiting for default service account to be created ...
	I1109 10:31:54.238332   29322 request.go:614] Waited for 191.522629ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:62610/api/v1/namespaces/default/serviceaccounts
	I1109 10:31:54.238423   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/default/serviceaccounts
	I1109 10:31:54.238435   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:54.238448   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:54.238485   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:54.242261   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:31:54.242280   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:54.242287   29322 round_trippers.go:580]     Content-Length: 262
	I1109 10:31:54.242294   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:54 GMT
	I1109 10:31:54.242302   29322 round_trippers.go:580]     Audit-Id: bf1a0c7d-e17c-441d-92a8-ad49bd35de7f
	I1109 10:31:54.242308   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:54.242315   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:54.242322   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:54.242328   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:54.242344   29322 request.go:1154] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1079"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"1c004ba2-6981-48fa-895c-1cc8e56c3bb4","resourceVersion":"312","creationTimestamp":"2022-11-09T18:26:07Z"}}]}
	I1109 10:31:54.242504   29322 default_sa.go:45] found service account: "default"
	I1109 10:31:54.242513   29322 default_sa.go:55] duration metric: took 195.773608ms for default service account to be created ...
	I1109 10:31:54.242522   29322 system_pods.go:116] waiting for k8s-apps to be running ...
	I1109 10:31:54.437237   29322 request.go:614] Waited for 194.674489ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods
	I1109 10:31:54.437300   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/namespaces/kube-system/pods
	I1109 10:31:54.437311   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:54.437354   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:54.437369   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:54.442178   29322 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1109 10:31:54.442193   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:54.442201   29322 round_trippers.go:580]     Audit-Id: a220ae7e-c910-4cc4-963f-ced211210750
	I1109 10:31:54.442209   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:54.442215   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:54.442219   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:54.442224   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:54.442230   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:54 GMT
	I1109 10:31:54.443399   29322 request.go:1154] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1079"},"items":[{"metadata":{"name":"coredns-565d847f94-fx6lt","generateName":"coredns-565d847f94-","namespace":"kube-system","uid":"680c8c15-39e0-4143-8dfd-30727e628800","resourceVersion":"1073","creationTimestamp":"2022-11-09T18:26:07Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"565d847f94"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-565d847f94","uid":"b1124b6b-6d50-46ef-950e-d15318782bf8","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2022-11-09T18:26:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b1124b6b-6d50-46ef-950e-d15318782bf8\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 85374 chars]
	I1109 10:31:54.445340   29322 system_pods.go:86] 12 kube-system pods found
	I1109 10:31:54.445350   29322 system_pods.go:89] "coredns-565d847f94-fx6lt" [680c8c15-39e0-4143-8dfd-30727e628800] Running
	I1109 10:31:54.445355   29322 system_pods.go:89] "etcd-multinode-102528" [5dde8340-2916-4da6-91aa-ea6dfe24a5ad] Running
	I1109 10:31:54.445360   29322 system_pods.go:89] "kindnet-6kjz8" [b34e8f27-542c-40de-80a7-cf1226429128] Running
	I1109 10:31:54.445365   29322 system_pods.go:89] "kindnet-9td8m" [bb563027-b991-4b95-921a-ee4687934118] Running
	I1109 10:31:54.445370   29322 system_pods.go:89] "kindnet-z66sn" [03cc3962-c1e0-444a-8743-743e707bf96d] Running
	I1109 10:31:54.445374   29322 system_pods.go:89] "kube-apiserver-multinode-102528" [f48fa313-e8ec-42bc-87bc-7daede794fe2] Running
	I1109 10:31:54.445380   29322 system_pods.go:89] "kube-controller-manager-multinode-102528" [3dd056ba-22b5-4b0c-aa7e-9e00d215df9a] Running
	I1109 10:31:54.445384   29322 system_pods.go:89] "kube-proxy-9wsxp" [03c6822b-9fef-4fa3-82a3-bb5082cf31b3] Running
	I1109 10:31:54.445390   29322 system_pods.go:89] "kube-proxy-c4lh6" [e9055586-6022-464a-acdd-6fce3c87392b] Running
	I1109 10:31:54.445394   29322 system_pods.go:89] "kube-proxy-kh6r6" [de2bad4b-35b4-4537-a6a3-7acd77c63e69] Running
	I1109 10:31:54.445398   29322 system_pods.go:89] "kube-scheduler-multinode-102528" [26dff845-4103-4884-86e3-42c37dc577c0] Running
	I1109 10:31:54.445404   29322 system_pods.go:89] "storage-provisioner" [5c5e247e-06db-434c-af4a-91a2c2a08779] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1109 10:31:54.445409   29322 system_pods.go:126] duration metric: took 202.887433ms to wait for k8s-apps to be running ...
	I1109 10:31:54.445414   29322 system_svc.go:44] waiting for kubelet service to be running ....
	I1109 10:31:54.445474   29322 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 10:31:54.455206   29322 system_svc.go:56] duration metric: took 9.78964ms WaitForService to wait for kubelet.
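
WaitForService above is a single systemctl invocation over SSH whose exit code answers the question; --quiet suppresses all output. A local sketch of the same check, probing the kubelet unit directly rather than mirroring the log's exact argument list:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// --quiet suppresses output; the exit code alone says active or not.
    	cmd := exec.Command("systemctl", "is-active", "--quiet", "kubelet")
    	if err := cmd.Run(); err != nil {
    		// Non-zero exit (or exec failure) means the unit is not active.
    		fmt.Println("kubelet is not running:", err)
    		return
    	}
    	fmt.Println("kubelet service is running")
    }
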
	I1109 10:31:54.455218   29322 kubeadm.go:573] duration metric: took 4.200657014s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1109 10:31:54.455232   29322 node_conditions.go:102] verifying NodePressure condition ...
	I1109 10:31:54.638318   29322 request.go:614] Waited for 183.037703ms due to client-side throttling, not priority and fairness, request: GET:https://127.0.0.1:62610/api/v1/nodes
	I1109 10:31:54.638443   29322 round_trippers.go:463] GET https://127.0.0.1:62610/api/v1/nodes
	I1109 10:31:54.638453   29322 round_trippers.go:469] Request Headers:
	I1109 10:31:54.638465   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:31:54.638475   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:31:54.642519   29322 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1109 10:31:54.642535   29322 round_trippers.go:577] Response Headers:
	I1109 10:31:54.642543   29322 round_trippers.go:580]     Audit-Id: 38c87ca6-24bd-4a35-bb99-82b11facd25b
	I1109 10:31:54.642556   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:31:54.642566   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:31:54.642572   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:31:54.642579   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:31:54.642585   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:31:54 GMT
	I1109 10:31:54.642682   29322 request.go:1154] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1079"},"items":[{"metadata":{"name":"multinode-102528","uid":"1f1e3cd7-23db-4f3c-9aee-a4211bc567bf","resourceVersion":"974","creationTimestamp":"2022-11-09T18:25:51Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-102528","kubernetes.io/os":"linux","minikube.k8s.io/commit":"b216797ebc629f5d4ea32d96a0fffe1acee1fa4c","minikube.k8s.io/name":"multinode-102528","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2022_11_09T10_25_55_0700","minikube.k8s.io/version":"v1.28.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFie
lds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time [truncated 10903 chars]
	I1109 10:31:54.643044   29322 node_conditions.go:122] node storage ephemeral capacity is 115273188Ki
	I1109 10:31:54.643052   29322 node_conditions.go:123] node cpu capacity is 6
	I1109 10:31:54.643059   29322 node_conditions.go:122] node storage ephemeral capacity is 115273188Ki
	I1109 10:31:54.643062   29322 node_conditions.go:123] node cpu capacity is 6
	I1109 10:31:54.643066   29322 node_conditions.go:105] duration metric: took 187.835565ms to run NodePressure ...
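
The NodePressure verification reads each node's status.capacity out of the NodeList and reports the figures logged above (both nodes show 115273188Ki of ephemeral storage and 6 CPUs). Capacity values arrive as Kubernetes quantity strings; this sketch leaves them unparsed rather than pulling in apimachinery's resource package:

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    type nodeList struct {
    	Items []struct {
    		Metadata struct {
    			Name string `json:"name"`
    		} `json:"metadata"`
    		Status struct {
    			Capacity map[string]string `json:"capacity"`
    		} `json:"status"`
    	} `json:"items"`
    }

    func main() {
    	// Trimmed stand-in for the GET /api/v1/nodes response in the log.
    	body := []byte(`{"items":[
    	  {"metadata":{"name":"multinode-102528"},
    	   "status":{"capacity":{"cpu":"6","ephemeral-storage":"115273188Ki"}}},
    	  {"metadata":{"name":"multinode-102528-m02"},
    	   "status":{"capacity":{"cpu":"6","ephemeral-storage":"115273188Ki"}}}]}`)
    	var nodes nodeList
    	if err := json.Unmarshal(body, &nodes); err != nil {
    		fmt.Println("decode:", err)
    		return
    	}
    	for _, n := range nodes.Items {
    		fmt.Printf("node %s: ephemeral capacity %s, cpu capacity %s\n",
    			n.Metadata.Name,
    			n.Status.Capacity["ephemeral-storage"],
    			n.Status.Capacity["cpu"])
    	}
    }
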
	I1109 10:31:54.643074   29322 start.go:217] waiting for startup goroutines ...
	I1109 10:31:54.643565   29322 config.go:180] Loaded profile config "multinode-102528": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1109 10:31:54.643635   29322 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/multinode-102528/config.json ...
	I1109 10:31:54.721001   29322 out.go:177] * Starting worker node multinode-102528-m02 in cluster multinode-102528
	I1109 10:31:54.742800   29322 cache.go:120] Beginning downloading kic base image for docker with docker
	I1109 10:31:54.764850   29322 out.go:177] * Pulling base image ...
	I1109 10:31:54.807997   29322 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1109 10:31:54.808009   29322 image.go:76] Checking for gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon
	I1109 10:31:54.808031   29322 cache.go:57] Caching tarball of preloaded images
	I1109 10:31:54.808224   29322 preload.go:174] Found /Users/jenkins/minikube-integration/15331-22028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1109 10:31:54.808245   29322 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on docker
	I1109 10:31:54.809047   29322 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/multinode-102528/config.json ...
	I1109 10:31:54.865581   29322 image.go:80] Found gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon, skipping pull
	I1109 10:31:54.865596   29322 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 exists in daemon, skipping load
	I1109 10:31:54.865607   29322 cache.go:208] Successfully downloaded all kic artifacts
	I1109 10:31:54.865649   29322 start.go:364] acquiring machines lock for multinode-102528-m02: {Name:mka0ddf96880a56e449afe60431280267c5ed209 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 10:31:54.865737   29322 start.go:368] acquired machines lock for "multinode-102528-m02" in 75.463µs
	I1109 10:31:54.865758   29322 start.go:96] Skipping create...Using existing machine configuration
	I1109 10:31:54.865764   29322 fix.go:55] fixHost starting: m02
	I1109 10:31:54.866043   29322 cli_runner.go:164] Run: docker container inspect multinode-102528-m02 --format={{.State.Status}}
	I1109 10:31:54.921930   29322 fix.go:103] recreateIfNeeded on multinode-102528-m02: state=Stopped err=<nil>
	W1109 10:31:54.921962   29322 fix.go:129] unexpected machine state, will restart: <nil>
	I1109 10:31:54.943659   29322 out.go:177] * Restarting existing docker container for "multinode-102528-m02" ...
	I1109 10:31:54.985920   29322 cli_runner.go:164] Run: docker start multinode-102528-m02
	I1109 10:31:55.316320   29322 cli_runner.go:164] Run: docker container inspect multinode-102528-m02 --format={{.State.Status}}
	I1109 10:31:55.375466   29322 kic.go:415] container "multinode-102528-m02" state is running.
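	The restart path above goes entirely through the Docker CLI: start the stopped node container, then poll its state until Docker reports it running. A minimal standalone sketch of the same wait loop (container name taken from this log):
	
	  # Start the stopped kic node and block until Docker reports it running.
	  docker start multinode-102528-m02
	  until [ "$(docker container inspect -f '{{.State.Status}}' multinode-102528-m02)" = "running" ]; do
	    sleep 1
	  done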
	I1109 10:31:55.376050   29322 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-102528-m02
	I1109 10:31:55.437402   29322 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/multinode-102528/config.json ...
	I1109 10:31:55.437859   29322 machine.go:88] provisioning docker machine ...
	I1109 10:31:55.437875   29322 ubuntu.go:169] provisioning hostname "multinode-102528-m02"
	I1109 10:31:55.437963   29322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102528-m02
	I1109 10:31:55.499733   29322 main.go:134] libmachine: Using SSH client type: native
	I1109 10:31:55.499915   29322 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6d00] 0x13e9e80 <nil>  [] 0s} 127.0.0.1 62641 <nil> <nil>}
	I1109 10:31:55.499924   29322 main.go:134] libmachine: About to run SSH command:
	sudo hostname multinode-102528-m02 && echo "multinode-102528-m02" | sudo tee /etc/hostname
	I1109 10:31:55.666249   29322 main.go:134] libmachine: SSH cmd err, output: <nil>: multinode-102528-m02
	
	I1109 10:31:55.666361   29322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102528-m02
	I1109 10:31:55.724134   29322 main.go:134] libmachine: Using SSH client type: native
	I1109 10:31:55.724308   29322 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6d00] 0x13e9e80 <nil>  [] 0s} 127.0.0.1 62641 <nil> <nil>}
	I1109 10:31:55.724320   29322 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-102528-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-102528-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-102528-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 10:31:55.840690   29322 main.go:134] libmachine: SSH cmd err, output: <nil>: 
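	The two SSH commands above set the hostname and pin it in /etc/hosts idempotently: the file is only edited when no line already carries the new name, and an existing 127.0.1.1 entry is rewritten rather than duplicated. The same logic as a standalone sketch (node name from this log; substitute as needed):
	
	  NAME=multinode-102528-m02
	  sudo hostname "$NAME" && echo "$NAME" | sudo tee /etc/hostname
	  if ! grep -q "\s$NAME\$" /etc/hosts; then
	    if grep -q '^127.0.1.1\s' /etc/hosts; then
	      sudo sed -i "s/^127.0.1.1\s.*/127.0.1.1 $NAME/" /etc/hosts   # replace stale mapping
	    else
	      echo "127.0.1.1 $NAME" | sudo tee -a /etc/hosts              # append fresh mapping
	    fi
	  fi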
	I1109 10:31:55.840708   29322 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15331-22028/.minikube CaCertPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15331-22028/.minikube}
	I1109 10:31:55.840721   29322 ubuntu.go:177] setting up certificates
	I1109 10:31:55.840728   29322 provision.go:83] configureAuth start
	I1109 10:31:55.840821   29322 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-102528-m02
	I1109 10:31:55.900864   29322 provision.go:138] copyHostCerts
	I1109 10:31:55.900912   29322 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/15331-22028/.minikube/key.pem
	I1109 10:31:55.900977   29322 exec_runner.go:144] found /Users/jenkins/minikube-integration/15331-22028/.minikube/key.pem, removing ...
	I1109 10:31:55.900983   29322 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15331-22028/.minikube/key.pem
	I1109 10:31:55.901078   29322 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15331-22028/.minikube/key.pem (1675 bytes)
	I1109 10:31:55.901279   29322 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.pem
	I1109 10:31:55.901322   29322 exec_runner.go:144] found /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.pem, removing ...
	I1109 10:31:55.901327   29322 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.pem
	I1109 10:31:55.901402   29322 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.pem (1082 bytes)
	I1109 10:31:55.901525   29322 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/15331-22028/.minikube/cert.pem
	I1109 10:31:55.901566   29322 exec_runner.go:144] found /Users/jenkins/minikube-integration/15331-22028/.minikube/cert.pem, removing ...
	I1109 10:31:55.901571   29322 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15331-22028/.minikube/cert.pem
	I1109 10:31:55.901634   29322 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15331-22028/.minikube/cert.pem (1123 bytes)
	I1109 10:31:55.901765   29322 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca-key.pem org=jenkins.multinode-102528-m02 san=[192.168.58.3 127.0.0.1 localhost 127.0.0.1 minikube multinode-102528-m02]
	I1109 10:31:56.009229   29322 provision.go:172] copyRemoteCerts
	I1109 10:31:56.009294   29322 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 10:31:56.009364   29322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102528-m02
	I1109 10:31:56.070255   29322 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62641 SSHKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/multinode-102528-m02/id_rsa Username:docker}
	I1109 10:31:56.182046   29322 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1109 10:31:56.182154   29322 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1109 10:31:56.204999   29322 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1109 10:31:56.205087   29322 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I1109 10:31:56.221873   29322 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1109 10:31:56.221957   29322 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1109 10:31:56.241348   29322 provision.go:86] duration metric: configureAuth took 400.61816ms
	I1109 10:31:56.241361   29322 ubuntu.go:193] setting minikube options for container-runtime
	I1109 10:31:56.241565   29322 config.go:180] Loaded profile config "multinode-102528": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1109 10:31:56.241649   29322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102528-m02
	I1109 10:31:56.299022   29322 main.go:134] libmachine: Using SSH client type: native
	I1109 10:31:56.299190   29322 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6d00] 0x13e9e80 <nil>  [] 0s} 127.0.0.1 62641 <nil> <nil>}
	I1109 10:31:56.299203   29322 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1109 10:31:56.415521   29322 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1109 10:31:56.415532   29322 ubuntu.go:71] root file system type: overlay
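	The probe above is a single df call; the node reports overlay because the kic "machine" is itself a container on an overlayfs root. Standalone, with no minikube-specific assumptions:
	
	  # Print only the filesystem type of the root mount, e.g. "overlay" inside a kic node.
	  df --output=fstype / | tail -n 1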
	I1109 10:31:56.415705   29322 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1109 10:31:56.415795   29322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102528-m02
	I1109 10:31:56.476931   29322 main.go:134] libmachine: Using SSH client type: native
	I1109 10:31:56.477098   29322 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6d00] 0x13e9e80 <nil>  [] 0s} 127.0.0.1 62641 <nil> <nil>}
	I1109 10:31:56.477157   29322 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.168.58.2"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1109 10:31:56.604577   29322 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.168.58.2
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1109 10:31:56.604697   29322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102528-m02
	I1109 10:31:56.661599   29322 main.go:134] libmachine: Using SSH client type: native
	I1109 10:31:56.661759   29322 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6d00] 0x13e9e80 <nil>  [] 0s} 127.0.0.1 62641 <nil> <nil>}
	I1109 10:31:56.661772   29322 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1109 10:31:56.781229   29322 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I1109 10:31:56.781244   29322 machine.go:91] provisioned docker machine in 1.343412808s
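	The docker.service update above is a write-compare-swap: the full unit is rendered to docker.service.new, and only if it differs from the live unit is it moved into place and followed by daemon-reload/enable/restart, so an unchanged unit costs no restart. The same pattern in isolation (render_unit is a hypothetical generator standing in for the printf seen above):
	
	  UNIT=/lib/systemd/system/docker.service
	  render_unit | sudo tee "$UNIT.new" >/dev/null    # hypothetical: emits the full unit text
	  # diff exits non-zero on difference, so the swap only runs when the unit changed.
	  sudo diff -u "$UNIT" "$UNIT.new" || {
	    sudo mv "$UNIT.new" "$UNIT"
	    sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker
	  }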
	I1109 10:31:56.781252   29322 start.go:300] post-start starting for "multinode-102528-m02" (driver="docker")
	I1109 10:31:56.781257   29322 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 10:31:56.781337   29322 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 10:31:56.781405   29322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102528-m02
	I1109 10:31:56.839805   29322 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62641 SSHKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/multinode-102528-m02/id_rsa Username:docker}
	I1109 10:31:56.926223   29322 ssh_runner.go:195] Run: cat /etc/os-release
	I1109 10:31:56.929787   29322 command_runner.go:130] > NAME="Ubuntu"
	I1109 10:31:56.929798   29322 command_runner.go:130] > VERSION="20.04.5 LTS (Focal Fossa)"
	I1109 10:31:56.929802   29322 command_runner.go:130] > ID=ubuntu
	I1109 10:31:56.929808   29322 command_runner.go:130] > ID_LIKE=debian
	I1109 10:31:56.929814   29322 command_runner.go:130] > PRETTY_NAME="Ubuntu 20.04.5 LTS"
	I1109 10:31:56.929819   29322 command_runner.go:130] > VERSION_ID="20.04"
	I1109 10:31:56.929826   29322 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I1109 10:31:56.929831   29322 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I1109 10:31:56.929835   29322 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I1109 10:31:56.929847   29322 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I1109 10:31:56.929852   29322 command_runner.go:130] > VERSION_CODENAME=focal
	I1109 10:31:56.929856   29322 command_runner.go:130] > UBUNTU_CODENAME=focal
	I1109 10:31:56.929913   29322 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1109 10:31:56.929926   29322 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1109 10:31:56.929933   29322 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1109 10:31:56.929944   29322 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I1109 10:31:56.929951   29322 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15331-22028/.minikube/addons for local assets ...
	I1109 10:31:56.930051   29322 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15331-22028/.minikube/files for local assets ...
	I1109 10:31:56.930235   29322 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15331-22028/.minikube/files/etc/ssl/certs/228682.pem -> 228682.pem in /etc/ssl/certs
	I1109 10:31:56.930241   29322 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/files/etc/ssl/certs/228682.pem -> /etc/ssl/certs/228682.pem
	I1109 10:31:56.930460   29322 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1109 10:31:56.937868   29322 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/files/etc/ssl/certs/228682.pem --> /etc/ssl/certs/228682.pem (1708 bytes)
	I1109 10:31:56.954936   29322 start.go:303] post-start completed in 173.679621ms
	I1109 10:31:56.955023   29322 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 10:31:56.955087   29322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102528-m02
	I1109 10:31:57.012790   29322 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62641 SSHKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/multinode-102528-m02/id_rsa Username:docker}
	I1109 10:31:57.093835   29322 command_runner.go:130] > 6%! (MISSING)
	I1109 10:31:57.093921   29322 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1109 10:31:57.098212   29322 command_runner.go:130] > 99G
	I1109 10:31:57.098556   29322 fix.go:57] fixHost completed within 2.232848247s
	I1109 10:31:57.098568   29322 start.go:83] releasing machines lock for "multinode-102528-m02", held for 2.232882802s
	I1109 10:31:57.098659   29322 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-102528-m02
	I1109 10:31:57.175556   29322 out.go:177] * Found network options:
	I1109 10:31:57.197555   29322 out.go:177]   - NO_PROXY=192.168.58.2
	W1109 10:31:57.219434   29322 proxy.go:119] fail to check proxy env: Error ip not in block
	W1109 10:31:57.219524   29322 proxy.go:119] fail to check proxy env: Error ip not in block
	I1109 10:31:57.219744   29322 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1109 10:31:57.219745   29322 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1109 10:31:57.219878   29322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102528-m02
	I1109 10:31:57.219880   29322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102528-m02
	I1109 10:31:57.280827   29322 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62641 SSHKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/multinode-102528-m02/id_rsa Username:docker}
	I1109 10:31:57.281500   29322 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62641 SSHKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/multinode-102528-m02/id_rsa Username:docker}
	I1109 10:31:57.454045   29322 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1109 10:31:57.454098   29322 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (234 bytes)
	I1109 10:31:57.467851   29322 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 10:31:57.550777   29322 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1109 10:31:57.641854   29322 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1109 10:31:57.652490   29322 command_runner.go:130] > # /lib/systemd/system/docker.service
	I1109 10:31:57.653618   29322 command_runner.go:130] > [Unit]
	I1109 10:31:57.653629   29322 command_runner.go:130] > Description=Docker Application Container Engine
	I1109 10:31:57.653635   29322 command_runner.go:130] > Documentation=https://docs.docker.com
	I1109 10:31:57.653641   29322 command_runner.go:130] > BindsTo=containerd.service
	I1109 10:31:57.653649   29322 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I1109 10:31:57.653680   29322 command_runner.go:130] > Wants=network-online.target
	I1109 10:31:57.653702   29322 command_runner.go:130] > Requires=docker.socket
	I1109 10:31:57.653707   29322 command_runner.go:130] > StartLimitBurst=3
	I1109 10:31:57.653711   29322 command_runner.go:130] > StartLimitIntervalSec=60
	I1109 10:31:57.653714   29322 command_runner.go:130] > [Service]
	I1109 10:31:57.653718   29322 command_runner.go:130] > Type=notify
	I1109 10:31:57.653722   29322 command_runner.go:130] > Restart=on-failure
	I1109 10:31:57.653725   29322 command_runner.go:130] > Environment=NO_PROXY=192.168.58.2
	I1109 10:31:57.653732   29322 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1109 10:31:57.653743   29322 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1109 10:31:57.653749   29322 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1109 10:31:57.653754   29322 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1109 10:31:57.653760   29322 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1109 10:31:57.653766   29322 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1109 10:31:57.653771   29322 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1109 10:31:57.653785   29322 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1109 10:31:57.653791   29322 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1109 10:31:57.653795   29322 command_runner.go:130] > ExecStart=
	I1109 10:31:57.653808   29322 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I1109 10:31:57.653814   29322 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1109 10:31:57.653819   29322 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1109 10:31:57.653825   29322 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1109 10:31:57.653831   29322 command_runner.go:130] > LimitNOFILE=infinity
	I1109 10:31:57.653837   29322 command_runner.go:130] > LimitNPROC=infinity
	I1109 10:31:57.653840   29322 command_runner.go:130] > LimitCORE=infinity
	I1109 10:31:57.653847   29322 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1109 10:31:57.653852   29322 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1109 10:31:57.653855   29322 command_runner.go:130] > TasksMax=infinity
	I1109 10:31:57.653859   29322 command_runner.go:130] > TimeoutStartSec=0
	I1109 10:31:57.653865   29322 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1109 10:31:57.653868   29322 command_runner.go:130] > Delegate=yes
	I1109 10:31:57.653877   29322 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1109 10:31:57.653881   29322 command_runner.go:130] > KillMode=process
	I1109 10:31:57.653884   29322 command_runner.go:130] > [Install]
	I1109 10:31:57.653888   29322 command_runner.go:130] > WantedBy=multi-user.target
	I1109 10:31:57.654028   29322 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I1109 10:31:57.654094   29322 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1109 10:31:57.664105   29322 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 10:31:57.675998   29322 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1109 10:31:57.676009   29322 command_runner.go:130] > image-endpoint: unix:///var/run/cri-dockerd.sock
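	The /etc/crictl.yaml written above points crictl at the cri-dockerd socket explicitly; without it crictl would probe default endpoints. Reproducing and verifying the file by hand:
	
	  printf 'runtime-endpoint: unix:///var/run/cri-dockerd.sock\nimage-endpoint: unix:///var/run/cri-dockerd.sock\n' | sudo tee /etc/crictl.yaml
	  sudo crictl version   # should report RuntimeName: docker, as it does later in this log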
	I1109 10:31:57.676873   29322 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1109 10:31:57.746356   29322 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1109 10:31:57.820129   29322 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 10:31:57.899433   29322 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1109 10:31:58.129538   29322 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1109 10:31:58.204483   29322 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 10:31:58.283728   29322 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I1109 10:31:58.293459   29322 start.go:451] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1109 10:31:58.293546   29322 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1109 10:31:58.297330   29322 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1109 10:31:58.297346   29322 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1109 10:31:58.297358   29322 command_runner.go:130] > Device: 100036h/1048630d	Inode: 131         Links: 1
	I1109 10:31:58.297365   29322 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I1109 10:31:58.297376   29322 command_runner.go:130] > Access: 2022-11-09 18:31:57.625836678 +0000
	I1109 10:31:58.297381   29322 command_runner.go:130] > Modify: 2022-11-09 18:31:57.569836675 +0000
	I1109 10:31:58.297386   29322 command_runner.go:130] > Change: 2022-11-09 18:31:57.575836675 +0000
	I1109 10:31:58.297392   29322 command_runner.go:130] >  Birth: -
	I1109 10:31:58.297612   29322 start.go:472] Will wait 60s for crictl version
	I1109 10:31:58.297680   29322 ssh_runner.go:195] Run: sudo crictl version
	I1109 10:31:58.325345   29322 command_runner.go:130] > Version:  0.1.0
	I1109 10:31:58.325357   29322 command_runner.go:130] > RuntimeName:  docker
	I1109 10:31:58.325361   29322 command_runner.go:130] > RuntimeVersion:  20.10.20
	I1109 10:31:58.325365   29322 command_runner.go:130] > RuntimeApiVersion:  1.41.0
	I1109 10:31:58.327432   29322 start.go:481] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.20
	RuntimeApiVersion:  1.41.0
	I1109 10:31:58.327530   29322 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1109 10:31:58.353552   29322 command_runner.go:130] > 20.10.20
	I1109 10:31:58.355839   29322 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1109 10:31:58.381394   29322 command_runner.go:130] > 20.10.20
	I1109 10:31:58.425287   29322 out.go:204] * Preparing Kubernetes v1.25.3 on Docker 20.10.20 ...
	I1109 10:31:58.447513   29322 out.go:177]   - env NO_PROXY=192.168.58.2
	I1109 10:31:58.468786   29322 cli_runner.go:164] Run: docker exec -t multinode-102528-m02 dig +short host.docker.internal
	I1109 10:31:58.590491   29322 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I1109 10:31:58.590597   29322 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I1109 10:31:58.594960   29322 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
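	The host IP was dug from host.docker.internal inside the node, and the command above pins it under the stable name host.minikube.internal by filtering any old entry out of /etc/hosts, appending the fresh one to a temp file, and copying it back, so repeated starts replace rather than accumulate entries. The same rewrite in isolation (IP taken from this run):
	
	  HOST_IP=192.168.65.2   # value resolved above; differs per environment
	  { grep -v $'\thost.minikube.internal$' /etc/hosts; printf '%s\thost.minikube.internal\n' "$HOST_IP"; } > /tmp/h.$$
	  sudo cp "/tmp/h.$$" /etc/hosts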
	I1109 10:31:58.604810   29322 certs.go:54] Setting up /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/multinode-102528 for IP: 192.168.58.3
	I1109 10:31:58.604952   29322 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.key
	I1109 10:31:58.605020   29322 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15331-22028/.minikube/proxy-client-ca.key
	I1109 10:31:58.605028   29322 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1109 10:31:58.605053   29322 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1109 10:31:58.605082   29322 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1109 10:31:58.605104   29322 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1109 10:31:58.605210   29322 certs.go:388] found cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/22868.pem (1338 bytes)
	W1109 10:31:58.605262   29322 certs.go:384] ignoring /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/22868_empty.pem, impossibly tiny 0 bytes
	I1109 10:31:58.605274   29322 certs.go:388] found cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca-key.pem (1675 bytes)
	I1109 10:31:58.605310   29322 certs.go:388] found cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca.pem (1082 bytes)
	I1109 10:31:58.605353   29322 certs.go:388] found cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/cert.pem (1123 bytes)
	I1109 10:31:58.605408   29322 certs.go:388] found cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/key.pem (1675 bytes)
	I1109 10:31:58.605493   29322 certs.go:388] found cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15331-22028/.minikube/files/etc/ssl/certs/228682.pem (1708 bytes)
	I1109 10:31:58.605533   29322 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/files/etc/ssl/certs/228682.pem -> /usr/share/ca-certificates/228682.pem
	I1109 10:31:58.605562   29322 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1109 10:31:58.605584   29322 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/22868.pem -> /usr/share/ca-certificates/22868.pem
	I1109 10:31:58.605912   29322 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 10:31:58.623459   29322 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1109 10:31:58.640694   29322 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 10:31:58.658011   29322 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1109 10:31:58.675170   29322 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/files/etc/ssl/certs/228682.pem --> /usr/share/ca-certificates/228682.pem (1708 bytes)
	I1109 10:31:58.691755   29322 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 10:31:58.709019   29322 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/22868.pem --> /usr/share/ca-certificates/22868.pem (1338 bytes)
	I1109 10:31:58.726119   29322 ssh_runner.go:195] Run: openssl version
	I1109 10:31:58.731233   29322 command_runner.go:130] > OpenSSL 1.1.1f  31 Mar 2020
	I1109 10:31:58.731602   29322 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/228682.pem && ln -fs /usr/share/ca-certificates/228682.pem /etc/ssl/certs/228682.pem"
	I1109 10:31:58.739416   29322 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/228682.pem
	I1109 10:31:58.743288   29322 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Nov  9 18:08 /usr/share/ca-certificates/228682.pem
	I1109 10:31:58.743374   29322 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Nov  9 18:08 /usr/share/ca-certificates/228682.pem
	I1109 10:31:58.743433   29322 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/228682.pem
	I1109 10:31:58.748523   29322 command_runner.go:130] > 3ec20f2e
	I1109 10:31:58.748823   29322 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/228682.pem /etc/ssl/certs/3ec20f2e.0"
	I1109 10:31:58.756013   29322 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 10:31:58.764233   29322 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 10:31:58.768192   29322 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Nov  9 18:04 /usr/share/ca-certificates/minikubeCA.pem
	I1109 10:31:58.768257   29322 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Nov  9 18:04 /usr/share/ca-certificates/minikubeCA.pem
	I1109 10:31:58.768331   29322 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 10:31:58.773370   29322 command_runner.go:130] > b5213941
	I1109 10:31:58.773760   29322 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1109 10:31:58.781192   29322 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22868.pem && ln -fs /usr/share/ca-certificates/22868.pem /etc/ssl/certs/22868.pem"
	I1109 10:31:58.789075   29322 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22868.pem
	I1109 10:31:58.792964   29322 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Nov  9 18:08 /usr/share/ca-certificates/22868.pem
	I1109 10:31:58.793048   29322 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Nov  9 18:08 /usr/share/ca-certificates/22868.pem
	I1109 10:31:58.793097   29322 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22868.pem
	I1109 10:31:58.798215   29322 command_runner.go:130] > 51391683
	I1109 10:31:58.798551   29322 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22868.pem /etc/ssl/certs/51391683.0"
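	Each CA installed above is hashed with openssl and symlinked into /etc/ssl/certs under <subject-hash>.0, the directory-lookup naming OpenSSL expects (what c_rehash automates). Condensing one of the link sequences from this log:
	
	  CERT=/usr/share/ca-certificates/minikubeCA.pem
	  HASH=$(openssl x509 -hash -noout -in "$CERT")        # b5213941 in this run
	  sudo ln -fs "$CERT" "/etc/ssl/certs/$HASH.0"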
	I1109 10:31:58.806577   29322 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1109 10:31:58.870083   29322 command_runner.go:130] > systemd
	I1109 10:31:58.872749   29322 cni.go:95] Creating CNI manager for ""
	I1109 10:31:58.872761   29322 cni.go:156] 2 nodes found, recommending kindnet
	I1109 10:31:58.872776   29322 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1109 10:31:58.872789   29322 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.3 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-102528 NodeName:multinode-102528-m02 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.58.3 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
	I1109 10:31:58.872877   29322 kubeadm.go:161] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.3
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "multinode-102528-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.3
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1109 10:31:58.872941   29322 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=multinode-102528-m02 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.58.3 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.3 ClusterName:multinode-102528 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1109 10:31:58.873019   29322 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
	I1109 10:31:58.880021   29322 command_runner.go:130] > kubeadm
	I1109 10:31:58.880031   29322 command_runner.go:130] > kubectl
	I1109 10:31:58.880037   29322 command_runner.go:130] > kubelet
	I1109 10:31:58.880789   29322 binaries.go:44] Found k8s binaries, skipping transfer
	I1109 10:31:58.880846   29322 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I1109 10:31:58.887904   29322 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (482 bytes)
	I1109 10:31:58.900445   29322 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1109 10:31:58.915415   29322 ssh_runner.go:195] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I1109 10:31:58.919528   29322 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 10:31:58.929108   29322 host.go:66] Checking if "multinode-102528" exists ...
	I1109 10:31:58.929324   29322 config.go:180] Loaded profile config "multinode-102528": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1109 10:31:58.929320   29322 start.go:286] JoinCluster: &{Name:multinode-102528 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:multinode-102528 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1109 10:31:58.929403   29322 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1109 10:31:58.929471   29322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102528
	I1109 10:31:58.986769   29322 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62611 SSHKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/multinode-102528/id_rsa Username:docker}
	I1109 10:31:59.123589   29322 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token tnwhsu.2be6clecyx1gx3ku --discovery-token-ca-cert-hash sha256:2868d5215be98146276bde8dad3b7790570b3d7effb1b90c11c864f3d8612f13 
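	The join command above comes from the standard kubeadm token flow: the control-plane node mints a token and prints a ready-made join invocation, including the discovery hash that lets the joining node authenticate the cluster CA. Run by hand on the primary node it looks like this (--ttl=0 makes the token non-expiring, matching the log):
	
	  sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm token create --print-join-command --ttl=0
	  # => kubeadm join control-plane.minikube.internal:8443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>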
	I1109 10:31:59.128019   29322 start.go:299] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1109 10:31:59.128051   29322 host.go:66] Checking if "multinode-102528" exists ...
	I1109 10:31:59.128292   29322 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl drain multinode-102528-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I1109 10:31:59.128367   29322 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102528
	I1109 10:31:59.187027   29322 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62611 SSHKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/multinode-102528/id_rsa Username:docker}
	I1109 10:31:59.326359   29322 command_runner.go:130] > node/multinode-102528-m02 cordoned
	I1109 10:32:02.344214   29322 command_runner.go:130] > pod "busybox-65db55d5d6-qdqrp" has DeletionTimestamp older than 1 seconds, skipping
	I1109 10:32:02.344229   29322 command_runner.go:130] > node/multinode-102528-m02 drained
	I1109 10:32:02.347277   29322 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I1109 10:32:02.347296   29322 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-6kjz8, kube-system/kube-proxy-c4lh6
	I1109 10:32:02.347319   29322 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl drain multinode-102528-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.219088637s)
	I1109 10:32:02.347331   29322 node.go:109] successfully drained node "m02"
	I1109 10:32:02.347678   29322 loader.go:374] Config loaded from file:  /Users/jenkins/minikube-integration/15331-22028/kubeconfig
	I1109 10:32:02.347880   29322 kapi.go:59] client config for multinode-102528: &rest.Config{Host:"https://127.0.0.1:62610", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/multinode-102528/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/multinode-102528/client.key", CAFile:"/Users/jenkins/minikube-integration/15331-22028/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x23463c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1109 10:32:02.348140   29322 request.go:1154] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I1109 10:32:02.348177   29322 round_trippers.go:463] DELETE https://127.0.0.1:62610/api/v1/nodes/multinode-102528-m02
	I1109 10:32:02.348182   29322 round_trippers.go:469] Request Headers:
	I1109 10:32:02.348190   29322 round_trippers.go:473]     Accept: application/json, */*
	I1109 10:32:02.348195   29322 round_trippers.go:473]     Content-Type: application/json
	I1109 10:32:02.348200   29322 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I1109 10:32:02.351423   29322 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1109 10:32:02.351434   29322 round_trippers.go:577] Response Headers:
	I1109 10:32:02.351440   29322 round_trippers.go:580]     Cache-Control: no-cache, private
	I1109 10:32:02.351457   29322 round_trippers.go:580]     Content-Type: application/json
	I1109 10:32:02.351466   29322 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: c65aa4c6-29e0-466d-9f88-cbd7dcbc6317
	I1109 10:32:02.351471   29322 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: b5ded558-f0cf-428a-b750-46ffa6811451
	I1109 10:32:02.351476   29322 round_trippers.go:580]     Content-Length: 171
	I1109 10:32:02.351481   29322 round_trippers.go:580]     Date: Wed, 09 Nov 2022 18:32:02 GMT
	I1109 10:32:02.351487   29322 round_trippers.go:580]     Audit-Id: 3c5902c1-7c65-45c9-aa26-c152b6722404
	I1109 10:32:02.351503   29322 request.go:1154] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-102528-m02","kind":"nodes","uid":"e1542fe1-dc88-406c-b080-a5120e5abea2"}}
	I1109 10:32:02.351534   29322 node.go:125] successfully deleted node "m02"
	I1109 10:32:02.351547   29322 start.go:303] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1109 10:32:02.351561   29322 start.go:307] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1109 10:32:02.351583   29322 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token tnwhsu.2be6clecyx1gx3ku --discovery-token-ca-cert-hash sha256:2868d5215be98146276bde8dad3b7790570b3d7effb1b90c11c864f3d8612f13 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-102528-m02"
	I1109 10:32:02.387862   29322 command_runner.go:130] > [preflight] Running pre-flight checks
	I1109 10:32:02.507179   29322 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1109 10:32:02.507205   29322 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1109 10:32:02.525754   29322 command_runner.go:130] ! W1109 18:32:02.398731    1115 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1109 10:32:02.525769   29322 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I1109 10:32:02.525777   29322 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I1109 10:32:02.525785   29322 command_runner.go:130] ! 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I1109 10:32:02.525791   29322 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I1109 10:32:02.525798   29322 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I1109 10:32:02.525808   29322 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-102528-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I1109 10:32:02.525814   29322 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	E1109 10:32:02.525853   29322 start.go:309] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token tnwhsu.2be6clecyx1gx3ku --discovery-token-ca-cert-hash sha256:2868d5215be98146276bde8dad3b7790570b3d7effb1b90c11c864f3d8612f13 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-102528-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1109 18:32:02.398731    1115 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-102528-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
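The pattern above is a race rather than a configuration error: the Node object for m02 is deleted successfully at 18:32:02.351, yet milliseconds later kubeadm join already finds a Node named "multinode-102528-m02" with status "Ready", evidently because the kubelet on m02 is never stopped and re-registers the node as soon as the object disappears. A minimal client-go sketch of the guard that is missing here, deleting the Node and confirming it stays gone before attempting the join (the package, helper name, and wiring are illustrative, not minikube's actual start.go code):

    // deleteNodeAndWait is a hedged sketch: delete a Node object, then poll
    // until the API server no longer returns it, so a subsequent kubeadm join
    // does not race a kubelet that is still re-registering the node.
    package nodeutil

    import (
    	"context"
    	"fmt"
    	"time"

    	apierrors "k8s.io/apimachinery/pkg/api/errors"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    func deleteNodeAndWait(ctx context.Context, cs kubernetes.Interface, name string) error {
    	if err := cs.CoreV1().Nodes().Delete(ctx, name, metav1.DeleteOptions{}); err != nil && !apierrors.IsNotFound(err) {
    		return err
    	}
    	// Poll until the Node is really gone; if the kubelet on the worker is
    	// still running it will recreate the object and we will see it here.
    	for i := 0; i < 30; i++ {
    		_, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    		if apierrors.IsNotFound(err) {
    			return nil // safe to run kubeadm join now
    		}
    		time.Sleep(time.Second)
    	}
    	return fmt.Errorf("node %q still present; is its kubelet still running?", name)
    }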
	I1109 10:32:02.525861   29322 start.go:312] resetting worker node "m02" before attempting to rejoin cluster...
	I1109 10:32:02.525869   29322 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force"
	I1109 10:32:02.562594   29322 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I1109 10:32:02.562610   29322 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I1109 10:32:02.562631   29322 start.go:314] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
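The follow-up kubeadm reset fails for an unrelated reason: the node image exposes both containerd and cri-dockerd sockets, and unlike the join command above, reset is invoked without --cri-socket, so kubeadm's autodetection refuses to pick one. kubeadm reset accepts the same flag; a sketch of a pinned invocation (the helper name is illustrative, and using the full unix:// scheme also avoids the "prepending scheme" deprecation warning seen in the join output):

    // resetWithPinnedSocket is a sketch, not minikube's code: it runs
    // "kubeadm reset" with an explicit CRI socket so autodetection cannot
    // fail when both containerd.sock and cri-dockerd.sock exist on the host.
    package nodeutil

    import "os/exec"

    func resetWithPinnedSocket() ([]byte, error) {
    	cmd := exec.Command("sudo", "kubeadm", "reset", "--force",
    		"--cri-socket", "unix:///var/run/cri-dockerd.sock")
    	return cmd.CombinedOutput()
    }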
	I1109 10:32:02.562654   29322 retry.go:31] will retry after 11.04660288s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token tnwhsu.2be6clecyx1gx3ku --discovery-token-ca-cert-hash sha256:2868d5215be98146276bde8dad3b7790570b3d7effb1b90c11c864f3d8612f13 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-102528-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1109 18:32:02.398731    1115 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-102528-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I1109 10:32:13.609726   29322 start.go:307] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1109 10:32:13.609795   29322 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token tnwhsu.2be6clecyx1gx3ku --discovery-token-ca-cert-hash sha256:2868d5215be98146276bde8dad3b7790570b3d7effb1b90c11c864f3d8612f13 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-102528-m02"
	I1109 10:32:13.647190   29322 command_runner.go:130] > [preflight] Running pre-flight checks
	I1109 10:32:13.746055   29322 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1109 10:32:13.746074   29322 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1109 10:32:13.762908   29322 command_runner.go:130] ! W1109 18:32:13.666500    1750 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1109 10:32:13.762922   29322 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I1109 10:32:13.762932   29322 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I1109 10:32:13.762937   29322 command_runner.go:130] ! 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I1109 10:32:13.762946   29322 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I1109 10:32:13.762952   29322 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I1109 10:32:13.762962   29322 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-102528-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I1109 10:32:13.762967   29322 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	E1109 10:32:13.762997   29322 start.go:309] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token tnwhsu.2be6clecyx1gx3ku --discovery-token-ca-cert-hash sha256:2868d5215be98146276bde8dad3b7790570b3d7effb1b90c11c864f3d8612f13 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-102528-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1109 18:32:13.666500    1750 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-102528-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I1109 10:32:13.763008   29322 start.go:312] resetting worker node "m02" before attempting to rejoin cluster...
	I1109 10:32:13.763015   29322 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force"
	I1109 10:32:13.800788   29322 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I1109 10:32:13.800816   29322 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I1109 10:32:13.800831   29322 start.go:314] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I1109 10:32:13.800846   29322 retry.go:31] will retry after 21.607636321s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token tnwhsu.2be6clecyx1gx3ku --discovery-token-ca-cert-hash sha256:2868d5215be98146276bde8dad3b7790570b3d7effb1b90c11c864f3d8612f13 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-102528-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1109 18:32:13.666500    1750 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-102528-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I1109 10:32:35.408030   29322 start.go:307] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1109 10:32:35.408073   29322 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token tnwhsu.2be6clecyx1gx3ku --discovery-token-ca-cert-hash sha256:2868d5215be98146276bde8dad3b7790570b3d7effb1b90c11c864f3d8612f13 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-102528-m02"
	I1109 10:32:35.444558   29322 command_runner.go:130] > [preflight] Running pre-flight checks
	I1109 10:32:35.543964   29322 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1109 10:32:35.543985   29322 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1109 10:32:35.562373   29322 command_runner.go:130] ! W1109 18:32:35.457675    1988 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1109 10:32:35.562388   29322 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I1109 10:32:35.562398   29322 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I1109 10:32:35.562403   29322 command_runner.go:130] ! 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I1109 10:32:35.562408   29322 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I1109 10:32:35.562413   29322 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I1109 10:32:35.562423   29322 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-102528-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I1109 10:32:35.562429   29322 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	E1109 10:32:35.562461   29322 start.go:309] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token tnwhsu.2be6clecyx1gx3ku --discovery-token-ca-cert-hash sha256:2868d5215be98146276bde8dad3b7790570b3d7effb1b90c11c864f3d8612f13 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-102528-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1109 18:32:35.457675    1988 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-102528-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I1109 10:32:35.562473   29322 start.go:312] resetting worker node "m02" before attempting to rejoin cluster...
	I1109 10:32:35.562482   29322 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force"
	I1109 10:32:35.601057   29322 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I1109 10:32:35.601078   29322 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I1109 10:32:35.601103   29322 start.go:314] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I1109 10:32:35.601115   29322 retry.go:31] will retry after 26.202601198s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token tnwhsu.2be6clecyx1gx3ku --discovery-token-ca-cert-hash sha256:2868d5215be98146276bde8dad3b7790570b3d7effb1b90c11c864f3d8612f13 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-102528-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1109 18:32:35.457675    1988 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-102528-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I1109 10:33:01.803577   29322 start.go:307] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1109 10:33:01.803708   29322 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token tnwhsu.2be6clecyx1gx3ku --discovery-token-ca-cert-hash sha256:2868d5215be98146276bde8dad3b7790570b3d7effb1b90c11c864f3d8612f13 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-102528-m02"
	I1109 10:33:01.839599   29322 command_runner.go:130] ! W1109 18:33:01.850092    2260 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1109 10:33:01.839616   29322 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I1109 10:33:01.863051   29322 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I1109 10:33:01.869890   29322 command_runner.go:130] ! 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I1109 10:33:01.930313   29322 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I1109 10:33:01.930326   29322 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I1109 10:33:01.955206   29322 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-102528-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I1109 10:33:01.955219   29322 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I1109 10:33:01.958721   29322 command_runner.go:130] > [preflight] Running pre-flight checks
	I1109 10:33:01.958736   29322 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1109 10:33:01.958743   29322 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	E1109 10:33:01.958769   29322 start.go:309] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token tnwhsu.2be6clecyx1gx3ku --discovery-token-ca-cert-hash sha256:2868d5215be98146276bde8dad3b7790570b3d7effb1b90c11c864f3d8612f13 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-102528-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1109 18:33:01.850092    2260 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-102528-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I1109 10:33:01.958778   29322 start.go:312] resetting worker node "m02" before attempting to rejoin cluster...
	I1109 10:33:01.958785   29322 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force"
	I1109 10:33:02.001666   29322 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I1109 10:33:02.001679   29322 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I1109 10:33:02.001697   29322 start.go:314] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I1109 10:33:02.001708   29322 retry.go:31] will retry after 31.647853817s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token tnwhsu.2be6clecyx1gx3ku --discovery-token-ca-cert-hash sha256:2868d5215be98146276bde8dad3b7790570b3d7effb1b90c11c864f3d8612f13 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-102528-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1109 18:33:01.850092    2260 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-102528-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I1109 10:33:33.648961   29322 start.go:307] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1109 10:33:33.649059   29322 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token tnwhsu.2be6clecyx1gx3ku --discovery-token-ca-cert-hash sha256:2868d5215be98146276bde8dad3b7790570b3d7effb1b90c11c864f3d8612f13 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-102528-m02"
	I1109 10:33:33.686879   29322 command_runner.go:130] > [preflight] Running pre-flight checks
	I1109 10:33:33.790062   29322 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1109 10:33:33.790076   29322 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1109 10:33:33.808514   29322 command_runner.go:130] ! W1109 18:33:33.698660    2589 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1109 10:33:33.808529   29322 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I1109 10:33:33.808542   29322 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I1109 10:33:33.808549   29322 command_runner.go:130] ! 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I1109 10:33:33.808554   29322 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I1109 10:33:33.808562   29322 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I1109 10:33:33.808573   29322 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-102528-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I1109 10:33:33.808580   29322 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	E1109 10:33:33.808610   29322 start.go:309] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token tnwhsu.2be6clecyx1gx3ku --discovery-token-ca-cert-hash sha256:2868d5215be98146276bde8dad3b7790570b3d7effb1b90c11c864f3d8612f13 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-102528-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1109 18:33:33.698660    2589 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-102528-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I1109 10:33:33.808619   29322 start.go:312] resetting worker node "m02" before attempting to rejoin cluster...
	I1109 10:33:33.808626   29322 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force"
	I1109 10:33:33.847640   29322 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I1109 10:33:33.847659   29322 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I1109 10:33:33.847674   29322 start.go:314] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I1109 10:33:33.847685   29322 retry.go:31] will retry after 46.809773289s: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token tnwhsu.2be6clecyx1gx3ku --discovery-token-ca-cert-hash sha256:2868d5215be98146276bde8dad3b7790570b3d7effb1b90c11c864f3d8612f13 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-102528-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1109 18:33:33.698660    2589 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-102528-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
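The retry intervals recorded by retry.go above grow from ~11 s through ~21 s, ~26 s, ~31 s to ~47 s, consistent with randomized exponential backoff. A compact sketch producing intervals in that ballpark; the 1.5x multiplier and +/-25% jitter are assumptions, not minikube's actual tuning:

    // backoffIntervals is a minimal sketch of randomized exponential backoff
    // producing intervals roughly like the retry.go lines above.
    package nodeutil

    import (
    	"math/rand"
    	"time"
    )

    func backoffIntervals(n int, base time.Duration) []time.Duration {
    	out := make([]time.Duration, 0, n)
    	d := base
    	for i := 0; i < n; i++ {
    		// +/-25% jitter so concurrent retries do not synchronize.
    		jitter := 0.75 + rand.Float64()*0.5
    		out = append(out, time.Duration(float64(d)*jitter))
    		d = time.Duration(float64(d) * 1.5) // grow ~1.5x per attempt
    	}
    	return out
    }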
	I1109 10:34:20.658411   29322 start.go:307] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.58.3 Port:0 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:false Worker:true}
	I1109 10:34:20.658560   29322 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token tnwhsu.2be6clecyx1gx3ku --discovery-token-ca-cert-hash sha256:2868d5215be98146276bde8dad3b7790570b3d7effb1b90c11c864f3d8612f13 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-102528-m02"
	I1109 10:34:20.695495   29322 command_runner.go:130] > [preflight] Running pre-flight checks
	I1109 10:34:20.794031   29322 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I1109 10:34:20.794057   29322 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I1109 10:34:20.813075   29322 command_runner.go:130] ! W1109 18:34:20.697530    3029 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
	I1109 10:34:20.813097   29322 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I1109 10:34:20.813112   29322 command_runner.go:130] ! 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I1109 10:34:20.813119   29322 command_runner.go:130] ! 	[WARNING SystemVerification]: missing optional cgroups: blkio
	I1109 10:34:20.813124   29322 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I1109 10:34:20.813131   29322 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I1109 10:34:20.813142   29322 command_runner.go:130] ! error execution phase kubelet-start: a Node with name "multinode-102528-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	I1109 10:34:20.813149   29322 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	E1109 10:34:20.813182   29322 start.go:309] worker node failed to join cluster, will retry: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token tnwhsu.2be6clecyx1gx3ku --discovery-token-ca-cert-hash sha256:2868d5215be98146276bde8dad3b7790570b3d7effb1b90c11c864f3d8612f13 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-102528-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1109 18:34:20.697530    3029 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-102528-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	I1109 10:34:20.813190   29322 start.go:312] resetting worker node "m02" before attempting to rejoin cluster...
	I1109 10:34:20.813197   29322 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force"
	I1109 10:34:20.850146   29322 command_runner.go:130] ! Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	I1109 10:34:20.850163   29322 command_runner.go:130] ! To see the stack trace of this error execute with --v=5 or higher
	I1109 10:34:20.850183   29322 start.go:314] kubeadm reset failed, continuing anyway: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --force": Process exited with status 1
	stdout:
	
	stderr:
	Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
	To see the stack trace of this error execute with --v=5 or higher
	I1109 10:34:20.850201   29322 start.go:288] JoinCluster complete in 2m21.924612469s
	I1109 10:34:20.872332   29322 out.go:177] 
	W1109 10:34:20.909138   29322 out.go:239] X Exiting due to GUEST_START: adding node: joining cp: error joining worker node to cluster: kubeadm join: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm join control-plane.minikube.internal:8443 --token tnwhsu.2be6clecyx1gx3ku --discovery-token-ca-cert-hash sha256:2868d5215be98146276bde8dad3b7790570b3d7effb1b90c11c864f3d8612f13 --ignore-preflight-errors=all --cri-socket /var/run/cri-dockerd.sock --node-name=multinode-102528-m02": Process exited with status 1
	stdout:
	[preflight] Running pre-flight checks
	[preflight] Reading configuration from the cluster...
	[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	
	stderr:
	W1109 18:34:20.697530    3029 initconfiguration.go:119] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/cri-dockerd.sock". Please update your configuration!
		[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
		[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
		[WARNING SystemVerification]: missing optional cgroups: blkio
		[WARNING Port-10250]: Port 10250 is in use
		[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	error execution phase kubelet-start: a Node with name "multinode-102528-m02" and status "Ready" already exists in the cluster. You must delete the existing Node or change the name of this new joining Node
	To see the stack trace of this error execute with --v=5 or higher
	
	W1109 10:34:20.909166   29322 out.go:239] * 
	W1109 10:34:20.910481   29322 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1109 10:34:21.024791   29322 out.go:177] 
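After five join/reset cycles the backoff budget is exhausted and start.go surfaces GUEST_START; every attempt failed the same way because the m02 kubelet kept re-registering its node (the describe output further below shows multinode-102528-m02 Ready with CreationTimestamp 18:32:02, right after the delete at the top of this excerpt). One plausible manual remediation, assuming a systemd-managed kubelet inside the Docker-driver node container; nothing in this log actually runs these commands:

    // stopKubeletFirst sketches the ordering that would break the cycle seen
    // above: stop the worker's kubelet before deleting its Node object, so it
    // cannot re-register as Ready between the delete and the kubeadm join.
    package nodeutil

    import "os/exec"

    func stopKubeletFirst(container string) error {
    	return exec.Command("docker", "exec", container,
    		"sudo", "systemctl", "stop", "kubelet").Run()
    }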
	
	* 
	* ==> Docker <==
	* -- Logs begin at Wed 2022-11-09 18:30:49 UTC, end at Wed 2022-11-09 18:34:22 UTC. --
	Nov 09 18:30:52 multinode-102528 dockerd[133]: time="2022-11-09T18:30:52.175783175Z" level=info msg="Daemon shutdown complete"
	Nov 09 18:30:52 multinode-102528 dockerd[133]: time="2022-11-09T18:30:52.175833740Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Nov 09 18:30:52 multinode-102528 systemd[1]: docker.service: Succeeded.
	Nov 09 18:30:52 multinode-102528 systemd[1]: Stopped Docker Application Container Engine.
	Nov 09 18:30:52 multinode-102528 systemd[1]: docker.service: Consumed 1.461s CPU time.
	Nov 09 18:30:52 multinode-102528 systemd[1]: Starting Docker Application Container Engine...
	Nov 09 18:30:52 multinode-102528 dockerd[704]: time="2022-11-09T18:30:52.227382168Z" level=info msg="Starting up"
	Nov 09 18:30:52 multinode-102528 dockerd[704]: time="2022-11-09T18:30:52.229113810Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Nov 09 18:30:52 multinode-102528 dockerd[704]: time="2022-11-09T18:30:52.229147651Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Nov 09 18:30:52 multinode-102528 dockerd[704]: time="2022-11-09T18:30:52.229165019Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Nov 09 18:30:52 multinode-102528 dockerd[704]: time="2022-11-09T18:30:52.229172637Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Nov 09 18:30:52 multinode-102528 dockerd[704]: time="2022-11-09T18:30:52.230270055Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Nov 09 18:30:52 multinode-102528 dockerd[704]: time="2022-11-09T18:30:52.230341162Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Nov 09 18:30:52 multinode-102528 dockerd[704]: time="2022-11-09T18:30:52.230381344Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Nov 09 18:30:52 multinode-102528 dockerd[704]: time="2022-11-09T18:30:52.230415735Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Nov 09 18:30:52 multinode-102528 dockerd[704]: time="2022-11-09T18:30:52.233608341Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Nov 09 18:30:52 multinode-102528 dockerd[704]: time="2022-11-09T18:30:52.238905162Z" level=info msg="Loading containers: start."
	Nov 09 18:30:52 multinode-102528 dockerd[704]: time="2022-11-09T18:30:52.356530105Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Nov 09 18:30:52 multinode-102528 dockerd[704]: time="2022-11-09T18:30:52.393630820Z" level=info msg="Loading containers: done."
	Nov 09 18:30:52 multinode-102528 dockerd[704]: time="2022-11-09T18:30:52.405131529Z" level=info msg="Docker daemon" commit=03df974 graphdriver(s)=overlay2 version=20.10.20
	Nov 09 18:30:52 multinode-102528 dockerd[704]: time="2022-11-09T18:30:52.405246056Z" level=info msg="Daemon has completed initialization"
	Nov 09 18:30:52 multinode-102528 systemd[1]: Started Docker Application Container Engine.
	Nov 09 18:30:52 multinode-102528 dockerd[704]: time="2022-11-09T18:30:52.430008658Z" level=info msg="API listen on [::]:2376"
	Nov 09 18:30:52 multinode-102528 dockerd[704]: time="2022-11-09T18:30:52.432847961Z" level=info msg="API listen on /var/run/docker.sock"
	Nov 09 18:31:39 multinode-102528 dockerd[704]: time="2022-11-09T18:31:39.298188502Z" level=info msg="ignoring event" container=23563cc735f1fe4f88c0c714027cdd69a0d95436e0cc578e36a1d0312a247f03 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	24fcc6629fcf3       6e38f40d628db       2 minutes ago       Running             storage-provisioner       3                   3500185558a9c
	27a4a4ae1064b       beaaf00edd38a       3 minutes ago       Running             kube-proxy                2                   234110f8a4f65
	878b9571222e0       d6e3e26021b60       3 minutes ago       Running             kindnet-cni               2                   06779789a4f89
	4396a3b526166       5185b96f0becf       3 minutes ago       Running             coredns                   2                   970a12d5d520e
	23563cc735f1f       6e38f40d628db       3 minutes ago       Exited              storage-provisioner       2                   3500185558a9c
	61247ab363e72       8c811b4aec35f       3 minutes ago       Running             busybox                   2                   7d8b4d1577b3d
	12d9dee23f657       a8a176a5d5d69       3 minutes ago       Running             etcd                      2                   0d07c97258e49
	0ed0040554971       0346dbd74bcb9       3 minutes ago       Running             kube-apiserver            2                   8b6b3e4a1cf2f
	42a75bb8a7d58       6d23ec0e8b87e       3 minutes ago       Running             kube-scheduler            2                   3faaa84eaf20c
	81fc0c4b04cb2       6039992312758       3 minutes ago       Running             kube-controller-manager   2                   00760f51a8ef1
	87217a284b95b       5185b96f0becf       5 minutes ago       Exited              coredns                   1                   f24399907a458
	246636dd97e8b       d6e3e26021b60       5 minutes ago       Exited              kindnet-cni               1                   a72eb1f58fc31
	e3ceefd732b85       8c811b4aec35f       5 minutes ago       Exited              busybox                   1                   5fbe0e04bbffb
	1e9e9464a6547       beaaf00edd38a       5 minutes ago       Exited              kube-proxy                1                   706558a4ed10d
	28b3a05115ad4       6d23ec0e8b87e       5 minutes ago       Exited              kube-scheduler            1                   f969ced4e9d4b
	78e4ea2c8ae02       0346dbd74bcb9       5 minutes ago       Exited              kube-apiserver            1                   b1b331d84fd35
	652c7e303fdd2       6039992312758       5 minutes ago       Exited              kube-controller-manager   1                   efc1daab7958d
	4e785d9e34057       a8a176a5d5d69       5 minutes ago       Exited              etcd                      1                   8b8ad03da153b
	
	* 
	* ==> coredns [4396a3b52616] <==
	* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = a1b5920ef1e8e10875eeec3214b810e7e404fdaf6cfe53f31cc42ae1e9ba5884ecf886330489b6b02fba5b37a31406fcb402b2501c7ab0318fc890d74b6fae55
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
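Separately from the join failure, this restarted coredns instance starts with an unsynced Kubernetes API and its last probe to the Service VIP times out, which points at kube-proxy still converging after the node restart. The failing check is just a TCP connect; a diagnostic sketch equivalent to it, to be run from inside the pod network (not part of coredns):

    // probeAPIVIP reproduces the failing check from the coredns log above: a
    // plain TCP dial to the kubernetes Service VIP with a short timeout.
    package nodeutil

    import (
    	"net"
    	"time"
    )

    func probeAPIVIP() error {
    	conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 2*time.Second)
    	if err != nil {
    		return err // matches "dial tcp 10.96.0.1:443: i/o timeout"
    	}
    	return conn.Close()
    }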
	
	* 
	* ==> coredns [87217a284b95] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = a1b5920ef1e8e10875eeec3214b810e7e404fdaf6cfe53f31cc42ae1e9ba5884ecf886330489b6b02fba5b37a31406fcb402b2501c7ab0318fc890d74b6fae55
	CoreDNS-1.9.3
	linux/amd64, go1.18.2, 45b0a11
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-102528
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-102528
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b216797ebc629f5d4ea32d96a0fffe1acee1fa4c
	                    minikube.k8s.io/name=multinode-102528
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_11_09T10_25_55_0700
	                    minikube.k8s.io/version=v1.28.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 09 Nov 2022 18:25:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-102528
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 09 Nov 2022 18:34:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 09 Nov 2022 18:31:08 +0000   Wed, 09 Nov 2022 18:25:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 09 Nov 2022 18:31:08 +0000   Wed, 09 Nov 2022 18:25:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 09 Nov 2022 18:31:08 +0000   Wed, 09 Nov 2022 18:25:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 09 Nov 2022 18:31:08 +0000   Wed, 09 Nov 2022 18:26:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.2
	  Hostname:    multinode-102528
	Capacity:
	  cpu:                6
	  ephemeral-storage:  115273188Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6085664Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  115273188Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6085664Ki
	  pods:               110
	System Info:
	  Machine ID:                 996614ec4c814b87b7ec8ebee3d0e8c9
	  System UUID:                fc63a62a-9eaa-4110-9152-858a54a4189a
	  Boot ID:                    fdb96f1f-af28-4634-9005-a24337fbfb7f
	  Kernel Version:             5.15.49-linuxkit
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.20
	  Kubelet Version:            v1.25.3
	  Kube-Proxy Version:         v1.25.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-65db55d5d6-cx4lf                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m31s
	  kube-system                 coredns-565d847f94-fx6lt                    100m (1%)     0 (0%)      70Mi (1%)        170Mi (2%)     8m16s
	  kube-system                 etcd-multinode-102528                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         8m29s
	  kube-system                 kindnet-9td8m                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      8m16s
	  kube-system                 kube-apiserver-multinode-102528             250m (4%)     0 (0%)      0 (0%)           0 (0%)         8m29s
	  kube-system                 kube-controller-manager-multinode-102528    200m (3%)     0 (0%)      0 (0%)           0 (0%)         8m31s
	  kube-system                 kube-proxy-9wsxp                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m16s
	  kube-system                 kube-scheduler-multinode-102528             100m (1%)     0 (0%)      0 (0%)           0 (0%)         8m29s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (14%)  100m (1%)
	  memory             220Mi (3%)  220Mi (3%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m14s                  kube-proxy       
	  Normal  Starting                 3m13s                  kube-proxy       
	  Normal  Starting                 5m19s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  8m29s                  kubelet          Node multinode-102528 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  8m29s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    8m29s                  kubelet          Node multinode-102528 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m29s                  kubelet          Node multinode-102528 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m29s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           8m16s                  node-controller  Node multinode-102528 event: Registered Node multinode-102528 in Controller
	  Normal  NodeReady                8m9s                   kubelet          Node multinode-102528 status is now: NodeReady
	  Normal  NodeHasNoDiskPressure    5m25s (x8 over 5m25s)  kubelet          Node multinode-102528 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m25s (x8 over 5m25s)  kubelet          Node multinode-102528 status is now: NodeHasSufficientMemory
	  Normal  Starting                 5m25s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     5m25s (x7 over 5m25s)  kubelet          Node multinode-102528 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m8s                   node-controller  Node multinode-102528 event: Registered Node multinode-102528 in Controller
	  Normal  Starting                 3m26s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m25s (x8 over 3m26s)  kubelet          Node multinode-102528 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m25s (x8 over 3m26s)  kubelet          Node multinode-102528 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m25s (x7 over 3m26s)  kubelet          Node multinode-102528 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m25s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m3s                   node-controller  Node multinode-102528 event: Registered Node multinode-102528 in Controller
	
	
	Name:               multinode-102528-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-102528-m02
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 09 Nov 2022 18:32:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-102528-m02
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 09 Nov 2022 18:34:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 09 Nov 2022 18:32:02 +0000   Wed, 09 Nov 2022 18:32:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 09 Nov 2022 18:32:02 +0000   Wed, 09 Nov 2022 18:32:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 09 Nov 2022 18:32:02 +0000   Wed, 09 Nov 2022 18:32:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 09 Nov 2022 18:32:02 +0000   Wed, 09 Nov 2022 18:32:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.58.3
	  Hostname:    multinode-102528-m02
	Capacity:
	  cpu:                6
	  ephemeral-storage:  115273188Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6085664Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  115273188Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6085664Ki
	  pods:               110
	System Info:
	  Machine ID:                 996614ec4c814b87b7ec8ebee3d0e8c9
	  System UUID:                d53c9378-b195-49be-9c68-0316a7cbcdc2
	  Boot ID:                    fdb96f1f-af28-4634-9005-a24337fbfb7f
	  Kernel Version:             5.15.49-linuxkit
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.20
	  Kubelet Version:            v1.25.3
	  Kube-Proxy Version:         v1.25.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-65db55d5d6-c8lbw    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 kindnet-6kjz8               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      7m55s
	  kube-system                 kube-proxy-c4lh6            0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (1%)  100m (1%)
	  memory             50Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-1Gi      0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 7m49s                  kube-proxy  
	  Normal  Starting                 2m15s                  kube-proxy  
	  Normal  Starting                 4m47s                  kube-proxy  
	  Normal  NodeHasSufficientMemory  7m55s (x2 over 7m55s)  kubelet     Node multinode-102528-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m55s (x2 over 7m55s)  kubelet     Node multinode-102528-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m55s (x2 over 7m55s)  kubelet     Node multinode-102528-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m55s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 7m55s                  kubelet     Starting kubelet.
	  Normal  NodeReady                7m34s                  kubelet     Node multinode-102528-m02 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  4m51s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  Starting                 4m51s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientPID     4m50s (x2 over 4m51s)  kubelet     Node multinode-102528-m02 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    4m50s (x2 over 4m51s)  kubelet     Node multinode-102528-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  4m50s (x2 over 4m51s)  kubelet     Node multinode-102528-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                4m40s                  kubelet     Node multinode-102528-m02 status is now: NodeReady
	  Normal  Starting                 2m27s                  kubelet     Starting kubelet.
	  Normal  NodeAllocatableEnforced  2m27s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m21s (x7 over 2m27s)  kubelet     Node multinode-102528-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m21s (x7 over 2m27s)  kubelet     Node multinode-102528-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m21s (x7 over 2m27s)  kubelet     Node multinode-102528-m02 status is now: NodeHasSufficientPID
	
	* 
	* ==> dmesg <==
	* [  +0.001687] FS-Cache: O-key=[8] '4a2c840400000000'
	[  +0.001147] FS-Cache: N-cookie c=0000000d [p=00000005 fl=2 nc=0 na=1]
	[  +0.001543] FS-Cache: N-cookie d=0000000093c8c150{9p.inode} n=0000000077758eeb
	[  +0.001696] FS-Cache: N-key=[8] '4a2c840400000000'
	[  +0.002251] FS-Cache: Duplicate cookie detected
	[  +0.001070] FS-Cache: O-cookie c=00000006 [p=00000005 fl=226 nc=0 na=1]
	[  +0.001559] FS-Cache: O-cookie d=0000000093c8c150{9p.inode} n=00000000d95a1a35
	[  +0.001699] FS-Cache: O-key=[8] '4a2c840400000000'
	[  +0.001156] FS-Cache: N-cookie c=0000000e [p=00000005 fl=2 nc=0 na=1]
	[  +0.001537] FS-Cache: N-cookie d=0000000093c8c150{9p.inode} n=0000000091f8b56c
	[  +0.001712] FS-Cache: N-key=[8] '4a2c840400000000'
	[  +2.660019] FS-Cache: Duplicate cookie detected
	[  +0.001070] FS-Cache: O-cookie c=00000008 [p=00000005 fl=226 nc=0 na=1]
	[  +0.001554] FS-Cache: O-cookie d=0000000093c8c150{9p.inode} n=00000000d668824c
	[  +0.001711] FS-Cache: O-key=[8] '492c840400000000'
	[  +0.001149] FS-Cache: N-cookie c=00000011 [p=00000005 fl=2 nc=0 na=1]
	[  +0.001529] FS-Cache: N-cookie d=0000000093c8c150{9p.inode} n=0000000077ea2309
	[  +0.001704] FS-Cache: N-key=[8] '492c840400000000'
	[  +0.398166] FS-Cache: Duplicate cookie detected
	[  +0.001076] FS-Cache: O-cookie c=0000000b [p=00000005 fl=226 nc=0 na=1]
	[  +0.001558] FS-Cache: O-cookie d=0000000093c8c150{9p.inode} n=0000000085c1c4de
	[  +0.001699] FS-Cache: O-key=[8] '542c840400000000'
	[  +0.001135] FS-Cache: N-cookie c=00000012 [p=00000005 fl=2 nc=0 na=1]
	[  +0.001533] FS-Cache: N-cookie d=0000000093c8c150{9p.inode} n=0000000070f92e96
	[  +0.001698] FS-Cache: N-key=[8] '542c840400000000'
	
	* 
	* ==> etcd [12d9dee23f65] <==
	* {"level":"info","ts":"2022-11-09T18:31:05.078Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"b2c6679ac05f2cf1","local-server-version":"3.5.4","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2022-11-09T18:31:05.078Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2022-11-09T18:31:05.079Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 switched to configuration voters=(12882097698489969905)"}
	{"level":"info","ts":"2022-11-09T18:31:05.079Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","added-peer-id":"b2c6679ac05f2cf1","added-peer-peer-urls":["https://192.168.58.2:2380"]}
	{"level":"info","ts":"2022-11-09T18:31:05.079Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3a56e4ca95e2355c","local-member-id":"b2c6679ac05f2cf1","cluster-version":"3.5"}
	{"level":"info","ts":"2022-11-09T18:31:05.079Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-11-09T18:31:05.080Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-11-09T18:31:05.080Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-11-09T18:31:05.080Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-11-09T18:31:05.080Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-11-09T18:31:05.080Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-11-09T18:31:06.373Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 3"}
	{"level":"info","ts":"2022-11-09T18:31:06.373Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 3"}
	{"level":"info","ts":"2022-11-09T18:31:06.373Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 3"}
	{"level":"info","ts":"2022-11-09T18:31:06.373Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 4"}
	{"level":"info","ts":"2022-11-09T18:31:06.373Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 4"}
	{"level":"info","ts":"2022-11-09T18:31:06.373Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 4"}
	{"level":"info","ts":"2022-11-09T18:31:06.373Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 4"}
	{"level":"info","ts":"2022-11-09T18:31:06.374Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-102528 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2022-11-09T18:31:06.374Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-11-09T18:31:06.374Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-11-09T18:31:06.375Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-11-09T18:31:06.375Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-11-09T18:31:06.375Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2022-11-09T18:31:06.376Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> etcd [4e785d9e3405] <==
	* {"level":"info","ts":"2022-11-09T18:28:59.239Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-11-09T18:28:59.239Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"b2c6679ac05f2cf1","initial-advertise-peer-urls":["https://192.168.58.2:2380"],"listen-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.58.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-11-09T18:28:59.239Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-11-09T18:29:00.628Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 is starting a new election at term 2"}
	{"level":"info","ts":"2022-11-09T18:29:00.629Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became pre-candidate at term 2"}
	{"level":"info","ts":"2022-11-09T18:29:00.629Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgPreVoteResp from b2c6679ac05f2cf1 at term 2"}
	{"level":"info","ts":"2022-11-09T18:29:00.629Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became candidate at term 3"}
	{"level":"info","ts":"2022-11-09T18:29:00.629Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 received MsgVoteResp from b2c6679ac05f2cf1 at term 3"}
	{"level":"info","ts":"2022-11-09T18:29:00.629Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b2c6679ac05f2cf1 became leader at term 3"}
	{"level":"info","ts":"2022-11-09T18:29:00.629Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b2c6679ac05f2cf1 elected leader b2c6679ac05f2cf1 at term 3"}
	{"level":"info","ts":"2022-11-09T18:29:00.629Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"b2c6679ac05f2cf1","local-member-attributes":"{Name:multinode-102528 ClientURLs:[https://192.168.58.2:2379]}","request-path":"/0/members/b2c6679ac05f2cf1/attributes","cluster-id":"3a56e4ca95e2355c","publish-timeout":"7s"}
	{"level":"info","ts":"2022-11-09T18:29:00.629Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-11-09T18:29:00.631Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.58.2:2379"}
	{"level":"info","ts":"2022-11-09T18:29:00.630Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-11-09T18:29:00.631Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-11-09T18:29:00.631Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-11-09T18:29:00.632Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-11-09T18:30:23.896Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2022-11-09T18:30:23.896Z","caller":"embed/etcd.go:368","msg":"closing etcd server","name":"multinode-102528","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"]}
	WARNING: 2022/11/09 18:30:23 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	WARNING: 2022/11/09 18:30:23 [core] grpc: addrConn.createTransport failed to connect to {192.168.58.2:2379 192.168.58.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.58.2:2379: connect: connection refused". Reconnecting...
	{"level":"info","ts":"2022-11-09T18:30:23.928Z","caller":"etcdserver/server.go:1453","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"b2c6679ac05f2cf1","current-leader-member-id":"b2c6679ac05f2cf1"}
	{"level":"info","ts":"2022-11-09T18:30:23.952Z","caller":"embed/etcd.go:563","msg":"stopping serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-11-09T18:30:23.953Z","caller":"embed/etcd.go:568","msg":"stopped serving peer traffic","address":"192.168.58.2:2380"}
	{"level":"info","ts":"2022-11-09T18:30:23.953Z","caller":"embed/etcd.go:370","msg":"closed etcd server","name":"multinode-102528","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.58.2:2380"],"advertise-client-urls":["https://192.168.58.2:2379"]}
	
	* 
	* ==> kernel <==
	*  18:34:23 up  3:33,  0 users,  load average: 0.01, 0.28, 0.43
	Linux multinode-102528 5.15.49-linuxkit #1 SMP Tue Sep 13 07:51:46 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kube-apiserver [0ed004055497] <==
	* I1109 18:31:07.916623       1 controller.go:85] Starting OpenAPI V3 controller
	I1109 18:31:07.916631       1 naming_controller.go:291] Starting NamingConditionController
	I1109 18:31:07.916722       1 establishing_controller.go:76] Starting EstablishingController
	I1109 18:31:07.916758       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I1109 18:31:07.916767       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I1109 18:31:07.917285       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I1109 18:31:07.917314       1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
	E1109 18:31:07.945459       1 controller.go:159] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I1109 18:31:07.948684       1 shared_informer.go:262] Caches are synced for node_authorizer
	I1109 18:31:07.988228       1 apf_controller.go:305] Running API Priority and Fairness config worker
	I1109 18:31:07.988601       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1109 18:31:08.032082       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I1109 18:31:08.032109       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I1109 18:31:08.033294       1 cache.go:39] Caches are synced for autoregister controller
	I1109 18:31:08.032266       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1109 18:31:08.036371       1 controller.go:616] quota admission added evaluator for: leases.coordination.k8s.io
	I1109 18:31:08.712159       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1109 18:31:08.890724       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1109 18:31:10.369298       1 controller.go:616] quota admission added evaluator for: daemonsets.apps
	I1109 18:31:10.477527       1 controller.go:616] quota admission added evaluator for: serviceaccounts
	I1109 18:31:10.484258       1 controller.go:616] quota admission added evaluator for: deployments.apps
	I1109 18:31:10.514894       1 controller.go:616] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1109 18:31:10.519739       1 controller.go:616] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1109 18:31:20.939174       1 controller.go:616] quota admission added evaluator for: endpoints
	I1109 18:31:20.948388       1 controller.go:616] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	* 
	* ==> kube-apiserver [78e4ea2c8ae0] <==
	*   "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W1109 18:30:23.911313       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	I1109 18:30:23.907209       1 cluster_authentication_trust_controller.go:463] Shutting down cluster_authentication_trust_controller controller
	I1109 18:30:23.907229       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	W1109 18:30:23.919311       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	
	* 
	* ==> kube-controller-manager [652c7e303fdd] <==
	* I1109 18:29:15.104577       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I1109 18:29:15.123795       1 shared_informer.go:262] Caches are synced for disruption
	I1109 18:29:15.186254       1 shared_informer.go:262] Caches are synced for HPA
	I1109 18:29:15.189169       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I1109 18:29:15.194053       1 shared_informer.go:262] Caches are synced for resource quota
	I1109 18:29:15.261361       1 shared_informer.go:262] Caches are synced for resource quota
	I1109 18:29:15.609216       1 shared_informer.go:262] Caches are synced for garbage collector
	I1109 18:29:15.678064       1 shared_informer.go:262] Caches are synced for garbage collector
	I1109 18:29:15.678090       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1109 18:29:29.538474       1 event.go:294] "Event occurred" object="default/busybox-65db55d5d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-65db55d5d6-5zmmm"
	W1109 18:29:32.540992       1 topologycache.go:199] Can't get CPU or zone information for multinode-102528-m03 node
	W1109 18:29:33.305249       1 topologycache.go:199] Can't get CPU or zone information for multinode-102528-m03 node
	W1109 18:29:33.305445       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-102528-m02" does not exist
	I1109 18:29:33.309955       1 range_allocator.go:367] Set node multinode-102528-m02 PodCIDR to [10.244.1.0/24]
	I1109 18:29:35.706754       1 event.go:294] "Event occurred" object="default/busybox-65db55d5d6-lbxzv" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-65db55d5d6-lbxzv"
	W1109 18:29:43.585035       1 topologycache.go:199] Can't get CPU or zone information for multinode-102528-m02 node
	I1109 18:29:50.938770       1 event.go:294] "Event occurred" object="default/busybox-65db55d5d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-65db55d5d6-qdqrp"
	W1109 18:29:53.968524       1 topologycache.go:199] Can't get CPU or zone information for multinode-102528-m02 node
	W1109 18:29:54.488841       1 topologycache.go:199] Can't get CPU or zone information for multinode-102528-m02 node
	W1109 18:29:54.489083       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-102528-m03" does not exist
	I1109 18:29:54.496537       1 range_allocator.go:367] Set node multinode-102528-m03 PodCIDR to [10.244.2.0/24]
	I1109 18:29:57.691856       1 event.go:294] "Event occurred" object="default/busybox-65db55d5d6-5zmmm" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-65db55d5d6-5zmmm"
	W1109 18:30:04.589458       1 topologycache.go:199] Can't get CPU or zone information for multinode-102528-m02 node
	W1109 18:30:07.336791       1 topologycache.go:199] Can't get CPU or zone information for multinode-102528-m02 node
	I1109 18:30:10.079081       1 event.go:294] "Event occurred" object="multinode-102528-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-102528-m03 event: Removing Node multinode-102528-m03 from Controller"
	
	* 
	* ==> kube-controller-manager [81fc0c4b04cb] <==
	* I1109 18:31:20.931692       1 shared_informer.go:262] Caches are synced for node
	I1109 18:31:20.931722       1 range_allocator.go:166] Starting range CIDR allocator
	I1109 18:31:20.931726       1 shared_informer.go:255] Waiting for caches to sync for cidrallocator
	I1109 18:31:20.931731       1 shared_informer.go:262] Caches are synced for cidrallocator
	I1109 18:31:20.933129       1 shared_informer.go:262] Caches are synced for endpoint
	I1109 18:31:20.937618       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I1109 18:31:20.940037       1 shared_informer.go:262] Caches are synced for daemon sets
	I1109 18:31:20.952208       1 shared_informer.go:262] Caches are synced for attach detach
	I1109 18:31:21.134481       1 shared_informer.go:262] Caches are synced for resource quota
	I1109 18:31:21.137561       1 shared_informer.go:262] Caches are synced for resource quota
	I1109 18:31:21.450526       1 shared_informer.go:262] Caches are synced for garbage collector
	I1109 18:31:21.473211       1 shared_informer.go:262] Caches are synced for garbage collector
	I1109 18:31:21.473278       1 garbagecollector.go:163] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1109 18:31:59.353924       1 event.go:294] "Event occurred" object="default/busybox-65db55d5d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-65db55d5d6-c8lbw"
	I1109 18:32:00.859307       1 gc_controller.go:324] "PodGC is force deleting Pod" pod="kube-system/kindnet-z66sn"
	I1109 18:32:00.863758       1 gc_controller.go:252] "Forced deletion of orphaned Pod succeeded" pod="kube-system/kindnet-z66sn"
	I1109 18:32:00.863790       1 gc_controller.go:324] "PodGC is force deleting Pod" pod="kube-system/kube-proxy-kh6r6"
	I1109 18:32:00.867905       1 gc_controller.go:252] "Forced deletion of orphaned Pod succeeded" pod="kube-system/kube-proxy-kh6r6"
	I1109 18:32:00.918560       1 event.go:294] "Event occurred" object="multinode-102528-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-102528-m02 status is now: NodeNotReady"
	I1109 18:32:00.922258       1 event.go:294] "Event occurred" object="kube-system/kube-proxy-c4lh6" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I1109 18:32:00.926210       1 event.go:294] "Event occurred" object="kube-system/kindnet-6kjz8" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I1109 18:32:00.932320       1 event.go:294] "Event occurred" object="default/busybox-65db55d5d6-qdqrp" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	W1109 18:32:02.484980       1 topologycache.go:199] Can't get CPU or zone information for multinode-102528-m02 node
	W1109 18:32:02.485118       1 actual_state_of_world.go:541] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="multinode-102528-m02" does not exist
	I1109 18:32:02.488116       1 range_allocator.go:367] Set node multinode-102528-m02 PodCIDR to [10.244.1.0/24]
	
	* 
	* ==> kube-proxy [1e9e9464a654] <==
	* I1109 18:29:04.133963       1 node.go:163] Successfully retrieved node IP: 192.168.58.2
	I1109 18:29:04.134033       1 server_others.go:138] "Detected node IP" address="192.168.58.2"
	I1109 18:29:04.134052       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I1109 18:29:04.165534       1 server_others.go:206] "Using iptables Proxier"
	I1109 18:29:04.165574       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I1109 18:29:04.165582       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I1109 18:29:04.165648       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I1109 18:29:04.165667       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1109 18:29:04.165933       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1109 18:29:04.166252       1 server.go:661] "Version info" version="v1.25.3"
	I1109 18:29:04.166286       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 18:29:04.167275       1 config.go:317] "Starting service config controller"
	I1109 18:29:04.167365       1 shared_informer.go:255] Waiting for caches to sync for service config
	I1109 18:29:04.167384       1 config.go:226] "Starting endpoint slice config controller"
	I1109 18:29:04.167388       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I1109 18:29:04.226005       1 config.go:444] "Starting node config controller"
	I1109 18:29:04.226067       1 shared_informer.go:255] Waiting for caches to sync for node config
	I1109 18:29:04.268432       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I1109 18:29:04.268484       1 shared_informer.go:262] Caches are synced for service config
	I1109 18:29:04.326504       1 shared_informer.go:262] Caches are synced for node config
	
	* 
	* ==> kube-proxy [27a4a4ae1064] <==
	* I1109 18:31:09.903634       1 node.go:163] Successfully retrieved node IP: 192.168.58.2
	I1109 18:31:09.903697       1 server_others.go:138] "Detected node IP" address="192.168.58.2"
	I1109 18:31:09.903771       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I1109 18:31:09.948787       1 server_others.go:206] "Using iptables Proxier"
	I1109 18:31:09.948833       1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
	I1109 18:31:09.951044       1 server_others.go:214] "Creating dualStackProxier for iptables"
	I1109 18:31:09.951066       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
	I1109 18:31:09.951113       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1109 18:31:09.951284       1 proxier.go:262] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1109 18:31:09.951435       1 server.go:661] "Version info" version="v1.25.3"
	I1109 18:31:09.951464       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 18:31:09.952789       1 config.go:317] "Starting service config controller"
	I1109 18:31:09.952818       1 shared_informer.go:255] Waiting for caches to sync for service config
	I1109 18:31:09.952855       1 config.go:226] "Starting endpoint slice config controller"
	I1109 18:31:09.952859       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I1109 18:31:09.953886       1 config.go:444] "Starting node config controller"
	I1109 18:31:09.953922       1 shared_informer.go:255] Waiting for caches to sync for node config
	I1109 18:31:10.053735       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I1109 18:31:10.053765       1 shared_informer.go:262] Caches are synced for service config
	I1109 18:31:10.053984       1 shared_informer.go:262] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [28b3a05115ad] <==
	* I1109 18:29:00.170410       1 serving.go:348] Generated self-signed cert in-memory
	I1109 18:29:02.231591       1 server.go:148] "Starting Kubernetes Scheduler" version="v1.25.3"
	I1109 18:29:02.231652       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 18:29:02.234276       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1109 18:29:02.234430       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1109 18:29:02.234385       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1109 18:29:02.234494       1 shared_informer.go:255] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1109 18:29:02.234402       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1109 18:29:02.234514       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1109 18:29:02.234409       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1109 18:29:02.234728       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1109 18:29:02.335233       1 shared_informer.go:262] Caches are synced for RequestHeaderAuthRequestController
	I1109 18:29:02.335239       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1109 18:29:02.335272       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	E1109 18:30:23.901235       1 scheduling_queue.go:963] "Error while retrieving next pod from scheduling queue" err="scheduling queue is closed"
	I1109 18:30:23.901482       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	I1109 18:30:23.901504       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I1109 18:30:23.901641       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1109 18:30:23.901662       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1109 18:30:23.901844       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kube-scheduler [42a75bb8a7d5] <==
	* I1109 18:31:00.372294       1 serving.go:348] Generated self-signed cert in-memory
	W1109 18:31:07.918055       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1109 18:31:07.918153       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1109 18:31:07.918175       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1109 18:31:07.918187       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1109 18:31:07.941998       1 server.go:148] "Starting Kubernetes Scheduler" version="v1.25.3"
	I1109 18:31:07.942037       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 18:31:07.946467       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1109 18:31:07.947397       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1109 18:31:07.947584       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1109 18:31:07.947653       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	W1109 18:31:07.949305       1 reflector.go:424] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1109 18:31:07.949357       1 reflector.go:140] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1109 18:31:09.348628       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2022-11-09 18:30:49 UTC, end at Wed 2022-11-09 18:34:24 UTC. --
	Nov 09 18:31:07 multinode-102528 kubelet[1259]: I1109 18:31:07.954267    1259 topology_manager.go:205] "Topology Admit Handler"
	Nov 09 18:31:08 multinode-102528 kubelet[1259]: I1109 18:31:08.032433    1259 kuberuntime_manager.go:1050] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 09 18:31:08 multinode-102528 kubelet[1259]: I1109 18:31:08.033502    1259 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 09 18:31:08 multinode-102528 kubelet[1259]: I1109 18:31:08.133279    1259 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjwcl\" (UniqueName: \"kubernetes.io/projected/5c5e247e-06db-434c-af4a-91a2c2a08779-kube-api-access-hjwcl\") pod \"storage-provisioner\" (UID: \"5c5e247e-06db-434c-af4a-91a2c2a08779\") " pod="kube-system/storage-provisioner"
	Nov 09 18:31:08 multinode-102528 kubelet[1259]: I1109 18:31:08.133704    1259 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bb563027-b991-4b95-921a-ee4687934118-xtables-lock\") pod \"kindnet-9td8m\" (UID: \"bb563027-b991-4b95-921a-ee4687934118\") " pod="kube-system/kindnet-9td8m"
	Nov 09 18:31:08 multinode-102528 kubelet[1259]: I1109 18:31:08.133731    1259 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4t7q\" (UniqueName: \"kubernetes.io/projected/bb563027-b991-4b95-921a-ee4687934118-kube-api-access-s4t7q\") pod \"kindnet-9td8m\" (UID: \"bb563027-b991-4b95-921a-ee4687934118\") " pod="kube-system/kindnet-9td8m"
	Nov 09 18:31:08 multinode-102528 kubelet[1259]: I1109 18:31:08.133750    1259 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/03c6822b-9fef-4fa3-82a3-bb5082cf31b3-kube-proxy\") pod \"kube-proxy-9wsxp\" (UID: \"03c6822b-9fef-4fa3-82a3-bb5082cf31b3\") " pod="kube-system/kube-proxy-9wsxp"
	Nov 09 18:31:08 multinode-102528 kubelet[1259]: I1109 18:31:08.133757    1259 kubelet_node_status.go:108] "Node was previously registered" node="multinode-102528"
	Nov 09 18:31:08 multinode-102528 kubelet[1259]: I1109 18:31:08.133764    1259 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4rqw\" (UniqueName: \"kubernetes.io/projected/03c6822b-9fef-4fa3-82a3-bb5082cf31b3-kube-api-access-g4rqw\") pod \"kube-proxy-9wsxp\" (UID: \"03c6822b-9fef-4fa3-82a3-bb5082cf31b3\") " pod="kube-system/kube-proxy-9wsxp"
	Nov 09 18:31:08 multinode-102528 kubelet[1259]: I1109 18:31:08.133966    1259 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/03c6822b-9fef-4fa3-82a3-bb5082cf31b3-xtables-lock\") pod \"kube-proxy-9wsxp\" (UID: \"03c6822b-9fef-4fa3-82a3-bb5082cf31b3\") " pod="kube-system/kube-proxy-9wsxp"
	Nov 09 18:31:08 multinode-102528 kubelet[1259]: I1109 18:31:08.133983    1259 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmmfk\" (UniqueName: \"kubernetes.io/projected/680c8c15-39e0-4143-8dfd-30727e628800-kube-api-access-gmmfk\") pod \"coredns-565d847f94-fx6lt\" (UID: \"680c8c15-39e0-4143-8dfd-30727e628800\") " pod="kube-system/coredns-565d847f94-fx6lt"
	Nov 09 18:31:08 multinode-102528 kubelet[1259]: I1109 18:31:08.133996    1259 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/5c5e247e-06db-434c-af4a-91a2c2a08779-tmp\") pod \"storage-provisioner\" (UID: \"5c5e247e-06db-434c-af4a-91a2c2a08779\") " pod="kube-system/storage-provisioner"
	Nov 09 18:31:08 multinode-102528 kubelet[1259]: I1109 18:31:08.134009    1259 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/bb563027-b991-4b95-921a-ee4687934118-cni-cfg\") pod \"kindnet-9td8m\" (UID: \"bb563027-b991-4b95-921a-ee4687934118\") " pod="kube-system/kindnet-9td8m"
	Nov 09 18:31:08 multinode-102528 kubelet[1259]: I1109 18:31:08.134026    1259 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bb563027-b991-4b95-921a-ee4687934118-lib-modules\") pod \"kindnet-9td8m\" (UID: \"bb563027-b991-4b95-921a-ee4687934118\") " pod="kube-system/kindnet-9td8m"
	Nov 09 18:31:08 multinode-102528 kubelet[1259]: I1109 18:31:08.134040    1259 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/03c6822b-9fef-4fa3-82a3-bb5082cf31b3-lib-modules\") pod \"kube-proxy-9wsxp\" (UID: \"03c6822b-9fef-4fa3-82a3-bb5082cf31b3\") " pod="kube-system/kube-proxy-9wsxp"
	Nov 09 18:31:08 multinode-102528 kubelet[1259]: I1109 18:31:08.134064    1259 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/680c8c15-39e0-4143-8dfd-30727e628800-config-volume\") pod \"coredns-565d847f94-fx6lt\" (UID: \"680c8c15-39e0-4143-8dfd-30727e628800\") " pod="kube-system/coredns-565d847f94-fx6lt"
	Nov 09 18:31:08 multinode-102528 kubelet[1259]: I1109 18:31:08.134083    1259 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvdqz\" (UniqueName: \"kubernetes.io/projected/6f7c0b0f-9f4e-467c-9f08-88ea1ee112b4-kube-api-access-rvdqz\") pod \"busybox-65db55d5d6-cx4lf\" (UID: \"6f7c0b0f-9f4e-467c-9f08-88ea1ee112b4\") " pod="default/busybox-65db55d5d6-cx4lf"
	Nov 09 18:31:08 multinode-102528 kubelet[1259]: I1109 18:31:08.134094    1259 reconciler.go:169] "Reconciler: start to sync state"
	Nov 09 18:31:08 multinode-102528 kubelet[1259]: I1109 18:31:08.134295    1259 kubelet_node_status.go:73] "Successfully registered node" node="multinode-102528"
	Nov 09 18:31:09 multinode-102528 kubelet[1259]: I1109 18:31:09.329472    1259 request.go:682] Waited for 1.093175077s due to client-side throttling, not priority and fairness, request: POST:https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/serviceaccounts/kube-proxy/token
	Nov 09 18:31:09 multinode-102528 kubelet[1259]: I1109 18:31:09.550416    1259 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="06779789a4f898bd77b41392ae87a73b9cae3ae374c259dc976a935699cde81b"
	Nov 09 18:31:39 multinode-102528 kubelet[1259]: I1109 18:31:39.787510    1259 scope.go:115] "RemoveContainer" containerID="acd607123986a27dfceff702c50f437300761a8ac6af73ed606b59cac8cc27f7"
	Nov 09 18:31:39 multinode-102528 kubelet[1259]: I1109 18:31:39.787780    1259 scope.go:115] "RemoveContainer" containerID="23563cc735f1fe4f88c0c714027cdd69a0d95436e0cc578e36a1d0312a247f03"
	Nov 09 18:31:39 multinode-102528 kubelet[1259]: E1109 18:31:39.787941    1259 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(5c5e247e-06db-434c-af4a-91a2c2a08779)\"" pod="kube-system/storage-provisioner" podUID=5c5e247e-06db-434c-af4a-91a2c2a08779
	Nov 09 18:31:55 multinode-102528 kubelet[1259]: I1109 18:31:55.045453    1259 scope.go:115] "RemoveContainer" containerID="23563cc735f1fe4f88c0c714027cdd69a0d95436e0cc578e36a1d0312a247f03"
	
	* 
	* ==> storage-provisioner [23563cc735f1] <==
	* I1109 18:31:09.301587       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1109 18:31:39.282245       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	* 
	* ==> storage-provisioner [24fcc6629fcf] <==
	* I1109 18:31:55.153614       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1109 18:31:55.162918       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1109 18:31:55.162985       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1109 18:32:12.559433       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1109 18:32:12.559572       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-102528_825a7a59-2cf8-4fe2-b15d-22ebc3cbebb1!
	I1109 18:32:12.559904       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ad4cde51-5511-4aa3-8004-a8a5c1b8fb99", APIVersion:"v1", ResourceVersion:"1166", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-102528_825a7a59-2cf8-4fe2-b15d-22ebc3cbebb1 became leader
	I1109 18:32:12.661680       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-102528_825a7a59-2cf8-4fe2-b15d-22ebc3cbebb1!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p multinode-102528 -n multinode-102528
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-102528 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: 
helpers_test.go:272: ======> post-mortem[TestMultiNode/serial/RestartMultiNode]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context multinode-102528 describe pod 
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context multinode-102528 describe pod : exit status 1 (38.070527ms)

                                                
                                                
** stderr ** 
	error: resource name may not be empty

                                                
                                                
** /stderr **
helpers_test.go:277: kubectl --context multinode-102528 describe pod : exit status 1
--- FAIL: TestMultiNode/serial/RestartMultiNode (217.84s)
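
Note on the post-mortem output above: the final `kubectl --context multinode-102528 describe pod : exit status 1` is an artifact of the log-collection helper rather than a further failure. The field-selector query found no non-running pods, so `describe pod` was invoked with no resource names, which kubectl rejects ("resource name may not be empty"). A minimal sketch of such a guard in Go (hypothetical helper; the real helpers_test.go code differs):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // postMortemDescribe describes non-running pods, but only when the
    // name query actually returned something, so "kubectl describe pod"
    // is never run with an empty argument list.
    func postMortemDescribe(kubecontext string) error {
    	out, err := exec.Command("kubectl", "--context", kubecontext,
    		"get", "po", "-A",
    		"-o=jsonpath={.items[*].metadata.name}",
    		"--field-selector=status.phase!=Running").Output()
    	if err != nil {
    		return err
    	}
    	names := strings.Fields(string(out))
    	if len(names) == 0 {
    		// Nothing to describe; avoids "error: resource name may not be empty".
    		fmt.Println("no non-running pods to describe")
    		return nil
    	}
    	args := append([]string{"--context", kubecontext, "describe", "pod"}, names...)
    	return exec.Command("kubectl", args...).Run()
    }

    func main() {
    	if err := postMortemDescribe("multinode-102528"); err != nil {
    		fmt.Println("post-mortem describe failed:", err)
    	}
    }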

                                                
                                    
x
+
TestRunningBinaryUpgrade (66.18s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:127: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.0.3459117124.exe start -p running-upgrade-104257 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:127: (dbg) Non-zero exit: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.0.3459117124.exe start -p running-upgrade-104257 --memory=2200 --vm-driver=docker : exit status 70 (50.809996589s)

                                                
                                                
-- stdout --
	! [running-upgrade-104257] minikube v1.9.0 on Darwin 13.0
	  - MINIKUBE_LOCATION=15331
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15331-22028/.minikube
	  - KUBECONFIG=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/legacy_kubeconfig806276016
	* Using the docker driver based on user configuration
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-11-09 18:43:30.991472924 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Deleting "running-upgrade-104257" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* StartHost failed again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-11-09 18:43:47.127256036 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this option.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	  - Run: "minikube delete -p running-upgrade-104257", then "minikube start -p running-upgrade-104257 --alsologtostderr -v=1" to try again with more logging

                                                
                                                
-- /stdout --
** stderr ** 
	* minikube 1.28.0 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.28.0
	* To disable this notice, run: 'minikube config set WantUpdateNotification false'
	
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB
	X Unable to start VM after repeated tries. Please try 'minikube delete' if possible: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-11-09 18:43:47.127256036 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this option.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:127: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.0.3459117124.exe start -p running-upgrade-104257 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:127: (dbg) Non-zero exit: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.0.3459117124.exe start -p running-upgrade-104257 --memory=2200 --vm-driver=docker : exit status 70 (4.459208254s)

                                                
                                                
-- stdout --
	* [running-upgrade-104257] minikube v1.9.0 on Darwin 13.0
	  - MINIKUBE_LOCATION=15331
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15331-22028/.minikube
	  - KUBECONFIG=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/legacy_kubeconfig1874631345
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "running-upgrade-104257" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:127: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.0.3459117124.exe start -p running-upgrade-104257 --memory=2200 --vm-driver=docker 
E1109 10:43:59.478487   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/addons-100328/client.crt: no such file or directory
version_upgrade_test.go:127: (dbg) Non-zero exit: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.0.3459117124.exe start -p running-upgrade-104257 --memory=2200 --vm-driver=docker : exit status 70 (4.405184283s)

                                                
                                                
-- stdout --
	* [running-upgrade-104257] minikube v1.9.0 on Darwin 13.0
	  - MINIKUBE_LOCATION=15331
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15331-22028/.minikube
	  - KUBECONFIG=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/legacy_kubeconfig4171562432
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "running-upgrade-104257" container ...

                                                
                                                
-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:133: legacy v1.9.0 start failed: exit status 70
panic.go:522: *** TestRunningBinaryUpgrade FAILED at 2022-11-09 10:44:01.207518 -0800 PST m=+2466.223964050
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestRunningBinaryUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect running-upgrade-104257
helpers_test.go:235: (dbg) docker inspect running-upgrade-104257:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1ea1ba077a9016e75d982402e996e15741e675abac5094035944f700c83f8398",
	        "Created": "2022-11-09T18:43:39.162317372Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 156980,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-11-09T18:43:39.384541893Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/1ea1ba077a9016e75d982402e996e15741e675abac5094035944f700c83f8398/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1ea1ba077a9016e75d982402e996e15741e675abac5094035944f700c83f8398/hostname",
	        "HostsPath": "/var/lib/docker/containers/1ea1ba077a9016e75d982402e996e15741e675abac5094035944f700c83f8398/hosts",
	        "LogPath": "/var/lib/docker/containers/1ea1ba077a9016e75d982402e996e15741e675abac5094035944f700c83f8398/1ea1ba077a9016e75d982402e996e15741e675abac5094035944f700c83f8398-json.log",
	        "Name": "/running-upgrade-104257",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "running-upgrade-104257:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/450e4d98f8267421896077ea93ccd214fde0eb4dde8427e426f6a722aed75884-init/diff:/var/lib/docker/overlay2/adf43e9e5a547cba3c81ad58a13ce0cabb8055b47dda1a9773228c197ec1bb25/diff:/var/lib/docker/overlay2/a1ad5d662585fd0755745df98e9dd560eae4f83c17196b6705c401b8849560b6/diff:/var/lib/docker/overlay2/b65ab4d9180e458cb3c5d95a7f1611604108a93911873b6eacf99b21f0d79e13/diff:/var/lib/docker/overlay2/6711ed93a15419121e1596eb52e5b3fbb1c3260b5a70286ea862a6bed2498c18/diff:/var/lib/docker/overlay2/0b7f62812d319cafd3b0ecdc5a69625456e984495e6d8270525b24d6b5305a8b/diff:/var/lib/docker/overlay2/fe0b0fd4637acce13df953451faf7cf44c212c3297e795bf4779ad9b78586bf2/diff:/var/lib/docker/overlay2/abb86979eb3adb5617ae06982ce015514373c1a11c53c26a153e9eb9a400136a/diff:/var/lib/docker/overlay2/5b492a5954a50ffc8a17f27a1a143699d0581698e4c2545bf358e41c85bbb913/diff:/var/lib/docker/overlay2/697ebbe64c558705ec8c95f4d52062873e4ab55bdc468bd3e8744cafb216c019/diff:/var/lib/docker/overlay2/eafa9c
71f13dca2cdb5dfbdc82a8a610719008921b2705037fffef109c385b6b/diff:/var/lib/docker/overlay2/65596f0e992c7c35b135f52ae662842139208fecea410c13bf51af9560c1aec6/diff:/var/lib/docker/overlay2/933de91df26a86644ba18fc45850233a1067fa9a9eff2db7a27fab1fd3af8ad9/diff:/var/lib/docker/overlay2/c649483d5cd065cfaa2632de07db045e8cd2c5fb99591e275b01612a4f04e3e6/diff:/var/lib/docker/overlay2/536487bd91bb8f1bd9ef31e39eb56585d1e257d2611bd045a5222a8b024dd7ff/diff:/var/lib/docker/overlay2/15d7006816a41bb58165751d0ccd0d90c91446a6ef8af9228eeaaad9aaa9318a/diff:/var/lib/docker/overlay2/1718e1e95c0786770e4af9b495368e8bfbe0997247281b37064f4beab1086ae0/diff:/var/lib/docker/overlay2/cb4b763a95cd268ecd1734e850256b870a257a506bf8d0154718c2906c11a29f/diff:/var/lib/docker/overlay2/13625002c8224e020493b9afd73b65e21a2bab1396039b2c64126a9f2efc41ed/diff:/var/lib/docker/overlay2/0b5b5d8421147188580f9e20f66b73eaacace1c53792c825c87b6a86e7db6863/diff:/var/lib/docker/overlay2/927b73608b8daedf14b9314365c7341e0bc477aa7479891bff1559a65b7838dc/diff:/var/lib/d
ocker/overlay2/0afc4fde9995e4abd1c497a7eb8b9a854510ccf9d2a1a54d520a04bae419c751/diff",
	                "MergedDir": "/var/lib/docker/overlay2/450e4d98f8267421896077ea93ccd214fde0eb4dde8427e426f6a722aed75884/merged",
	                "UpperDir": "/var/lib/docker/overlay2/450e4d98f8267421896077ea93ccd214fde0eb4dde8427e426f6a722aed75884/diff",
	                "WorkDir": "/var/lib/docker/overlay2/450e4d98f8267421896077ea93ccd214fde0eb4dde8427e426f6a722aed75884/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "running-upgrade-104257",
	                "Source": "/var/lib/docker/volumes/running-upgrade-104257/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "running-upgrade-104257",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "running-upgrade-104257",
	                "name.minikube.sigs.k8s.io": "running-upgrade-104257",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cad27c48426c35b750b2f14296affa396f6cb96187f12f4c485f239ca0f1d210",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63475"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63476"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63477"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/cad27c48426c",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "727bc6b4f5ce063fd4e1ef0a448cce7cf30db1d024605a5381774e6e8f2d8cea",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.2",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:02",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "e8d9424b02579a850439499a33cee1cdbc22bc61e600dad623d03c6ba7a693ad",
	                    "EndpointID": "727bc6b4f5ce063fd4e1ef0a448cce7cf30db1d024605a5381774e6e8f2d8cea",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.2",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
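Individual fields can be pulled from the inspect JSON above with a Go-template format string instead of reading the full dump, for example (illustrative):

	    docker inspect -f '{{.State.Status}} {{.NetworkSettings.IPAddress}}' running-upgrade-104257
	    # prints: running 172.17.0.2   (matching the State and NetworkSettings blocks above)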
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p running-upgrade-104257 -n running-upgrade-104257
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p running-upgrade-104257 -n running-upgrade-104257: exit status 6 (390.110662ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1109 10:44:01.644942   32486 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-104257" does not appear in /Users/jenkins/minikube-integration/15331-22028/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "running-upgrade-104257" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
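The stale-context warning in the status output above can normally be cleared by re-pointing kubectl at the profile, e.g. (sketch, using minikube's standard -p profile flag):

	    minikube update-context -p running-upgrade-104257
	    kubectl config current-context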
helpers_test.go:175: Cleaning up "running-upgrade-104257" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p running-upgrade-104257
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p running-upgrade-104257: (2.33222931s)
--- FAIL: TestRunningBinaryUpgrade (66.18s)

                                                
                                    
x
+
TestKubernetesUpgrade (577.55s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-104454 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker 
E1109 10:45:01.931971   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/skaffold-103914/client.crt: no such file or directory
E1109 10:45:01.938109   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/skaffold-103914/client.crt: no such file or directory
E1109 10:45:01.949143   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/skaffold-103914/client.crt: no such file or directory
E1109 10:45:01.969368   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/skaffold-103914/client.crt: no such file or directory
E1109 10:45:02.009443   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/skaffold-103914/client.crt: no such file or directory
E1109 10:45:02.090488   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/skaffold-103914/client.crt: no such file or directory
E1109 10:45:02.250902   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/skaffold-103914/client.crt: no such file or directory
E1109 10:45:02.572378   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/skaffold-103914/client.crt: no such file or directory
E1109 10:45:03.212896   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/skaffold-103914/client.crt: no such file or directory
E1109 10:45:04.493581   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/skaffold-103914/client.crt: no such file or directory
E1109 10:45:07.054192   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/skaffold-103914/client.crt: no such file or directory
E1109 10:45:12.174383   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/skaffold-103914/client.crt: no such file or directory
E1109 10:45:22.415188   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/skaffold-103914/client.crt: no such file or directory

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:229: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-104454 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker : exit status 109 (4m9.535014283s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-104454] minikube v1.28.0 on Darwin 13.0
	  - MINIKUBE_LOCATION=15331
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15331-22028/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15331-22028/.minikube
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node kubernetes-upgrade-104454 in cluster kubernetes-upgrade-104454
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.16.0 on Docker 20.10.20 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
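The doubled "Generating certificates and keys ... / Booting up control plane ..." lines in the stdout above indicate the control-plane bootstrap was attempted twice before giving up. On a failure like this, a usual next step is to pull the guest logs, e.g. (sketch; the --problems flag filters for known problem patterns):

	    out/minikube-darwin-amd64 logs -p kubernetes-upgrade-104454 --problems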
** stderr ** 
	I1109 10:44:54.541355   32853 out.go:296] Setting OutFile to fd 1 ...
	I1109 10:44:54.541546   32853 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1109 10:44:54.541551   32853 out.go:309] Setting ErrFile to fd 2...
	I1109 10:44:54.541556   32853 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1109 10:44:54.541656   32853 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15331-22028/.minikube/bin
	I1109 10:44:54.542216   32853 out.go:303] Setting JSON to false
	I1109 10:44:54.561411   32853 start.go:116] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":13469,"bootTime":1668006025,"procs":393,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.0","kernelVersion":"22.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W1109 10:44:54.561539   32853 start.go:124] gopshost.Virtualization returned error: not implemented yet
	I1109 10:44:54.583657   32853 out.go:177] * [kubernetes-upgrade-104454] minikube v1.28.0 on Darwin 13.0
	I1109 10:44:54.626295   32853 notify.go:220] Checking for updates...
	I1109 10:44:54.648383   32853 out.go:177]   - MINIKUBE_LOCATION=15331
	I1109 10:44:54.669309   32853 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15331-22028/kubeconfig
	I1109 10:44:54.690542   32853 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1109 10:44:54.712564   32853 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 10:44:54.734567   32853 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15331-22028/.minikube
	I1109 10:44:54.756300   32853 config.go:180] Loaded profile config "cert-expiration-104155": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1109 10:44:54.756434   32853 driver.go:365] Setting default libvirt URI to qemu:///system
	I1109 10:44:54.818237   32853 docker.go:137] docker version: linux-20.10.20
	I1109 10:44:54.818372   32853 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 10:44:54.958192   32853 info.go:266] docker info: {ID:DDH7:AERQ:YR2O:D5QA:GJP7:5WGB:4FTA:645H:2EO7:4YBT:LF5P:55H2 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:68 OomKillDisable:false NGoroutines:53 SystemTime:2022-11-09 18:44:54.874144102 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231719936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.1] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/lo
cal/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1109 10:44:55.000267   32853 out.go:177] * Using the docker driver based on user configuration
	I1109 10:44:55.021256   32853 start.go:282] selected driver: docker
	I1109 10:44:55.021284   32853 start.go:808] validating driver "docker" against <nil>
	I1109 10:44:55.021321   32853 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 10:44:55.025095   32853 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 10:44:55.165234   32853 info.go:266] docker info: {ID:DDH7:AERQ:YR2O:D5QA:GJP7:5WGB:4FTA:645H:2EO7:4YBT:LF5P:55H2 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:68 OomKillDisable:false NGoroutines:53 SystemTime:2022-11-09 18:44:55.081920324 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231719936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.1] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/lo
cal/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1109 10:44:55.165366   32853 start_flags.go:303] no existing cluster config was found, will generate one from the flags 
	I1109 10:44:55.165495   32853 start_flags.go:883] Wait components to verify : map[apiserver:true system_pods:true]
	I1109 10:44:55.187198   32853 out.go:177] * Using Docker Desktop driver with root privileges
	I1109 10:44:55.208911   32853 cni.go:95] Creating CNI manager for ""
	I1109 10:44:55.208946   32853 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1109 10:44:55.208971   32853 start_flags.go:317] config:
	{Name:kubernetes-upgrade-104454 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-104454 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker C
RISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1109 10:44:55.230795   32853 out.go:177] * Starting control plane node kubernetes-upgrade-104454 in cluster kubernetes-upgrade-104454
	I1109 10:44:55.273059   32853 cache.go:120] Beginning downloading kic base image for docker with docker
	I1109 10:44:55.308921   32853 out.go:177] * Pulling base image ...
	I1109 10:44:55.351050   32853 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1109 10:44:55.351089   32853 image.go:76] Checking for gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon
	I1109 10:44:55.351146   32853 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15331-22028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I1109 10:44:55.351166   32853 cache.go:57] Caching tarball of preloaded images
	I1109 10:44:55.351414   32853 preload.go:174] Found /Users/jenkins/minikube-integration/15331-22028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1109 10:44:55.351433   32853 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I1109 10:44:55.352485   32853 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/kubernetes-upgrade-104454/config.json ...
	I1109 10:44:55.352635   32853 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/kubernetes-upgrade-104454/config.json: {Name:mk4d70459f3adb6921416ca05bab5355c47d1fa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 10:44:55.407290   32853 image.go:80] Found gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon, skipping pull
	I1109 10:44:55.407332   32853 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 exists in daemon, skipping load
	I1109 10:44:55.407342   32853 cache.go:208] Successfully downloaded all kic artifacts
	I1109 10:44:55.407399   32853 start.go:364] acquiring machines lock for kubernetes-upgrade-104454: {Name:mk1c8e548782a85bd1897e2b49ce600df1a310a4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 10:44:55.407566   32853 start.go:368] acquired machines lock for "kubernetes-upgrade-104454" in 154.762µs
	I1109 10:44:55.407601   32853 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-104454 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-104454 Namespace:default APIServerName:minikubeCA A
PIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/b
in/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1109 10:44:55.407659   32853 start.go:125] createHost starting for "" (driver="docker")
	I1109 10:44:55.449633   32853 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1109 10:44:55.450088   32853 start.go:159] libmachine.API.Create for "kubernetes-upgrade-104454" (driver="docker")
	I1109 10:44:55.450131   32853 client.go:168] LocalClient.Create starting
	I1109 10:44:55.450327   32853 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca.pem
	I1109 10:44:55.450415   32853 main.go:134] libmachine: Decoding PEM data...
	I1109 10:44:55.450446   32853 main.go:134] libmachine: Parsing certificate...
	I1109 10:44:55.450554   32853 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/cert.pem
	I1109 10:44:55.450616   32853 main.go:134] libmachine: Decoding PEM data...
	I1109 10:44:55.450638   32853 main.go:134] libmachine: Parsing certificate...
	I1109 10:44:55.451354   32853 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-104454 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1109 10:44:55.506065   32853 cli_runner.go:211] docker network inspect kubernetes-upgrade-104454 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1109 10:44:55.506178   32853 network_create.go:272] running [docker network inspect kubernetes-upgrade-104454] to gather additional debugging logs...
	I1109 10:44:55.506193   32853 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-104454
	W1109 10:44:55.562230   32853 cli_runner.go:211] docker network inspect kubernetes-upgrade-104454 returned with exit code 1
	I1109 10:44:55.562259   32853 network_create.go:275] error running [docker network inspect kubernetes-upgrade-104454]: docker network inspect kubernetes-upgrade-104454: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kubernetes-upgrade-104454
	I1109 10:44:55.562280   32853 network_create.go:277] output of [docker network inspect kubernetes-upgrade-104454]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kubernetes-upgrade-104454
	
	** /stderr **
	I1109 10:44:55.562407   32853 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 10:44:55.618372   32853 network.go:295] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000012758] misses:0}
	I1109 10:44:55.618411   32853 network.go:241] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1109 10:44:55.618423   32853 network_create.go:115] attempt to create docker network kubernetes-upgrade-104454 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1109 10:44:55.618522   32853 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-104454 kubernetes-upgrade-104454
	W1109 10:44:55.671519   32853 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-104454 kubernetes-upgrade-104454 returned with exit code 1
	W1109 10:44:55.671555   32853 network_create.go:107] failed to create docker network kubernetes-upgrade-104454 192.168.49.0/24, will retry: subnet is taken
	I1109 10:44:55.671821   32853 network.go:286] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000012758] amended:false}} dirty:map[] misses:0}
	I1109 10:44:55.671838   32853 network.go:244] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1109 10:44:55.672059   32853 network.go:295] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000012758] amended:true}} dirty:map[192.168.49.0:0xc000012758 192.168.58.0:0xc000a32510] misses:0}
	I1109 10:44:55.672072   32853 network.go:241] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1109 10:44:55.672084   32853 network_create.go:115] attempt to create docker network kubernetes-upgrade-104454 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1109 10:44:55.672181   32853 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-104454 kubernetes-upgrade-104454
	W1109 10:44:55.725518   32853 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-104454 kubernetes-upgrade-104454 returned with exit code 1
	W1109 10:44:55.725563   32853 network_create.go:107] failed to create docker network kubernetes-upgrade-104454 192.168.58.0/24, will retry: subnet is taken
	I1109 10:44:55.725824   32853 network.go:286] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000012758] amended:true}} dirty:map[192.168.49.0:0xc000012758 192.168.58.0:0xc000a32510] misses:1}
	I1109 10:44:55.725840   32853 network.go:244] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1109 10:44:55.726047   32853 network.go:295] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000012758] amended:true}} dirty:map[192.168.49.0:0xc000012758 192.168.58.0:0xc000a32510 192.168.67.0:0xc0001105c0] misses:1}
	I1109 10:44:55.726058   32853 network.go:241] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1109 10:44:55.726067   32853 network_create.go:115] attempt to create docker network kubernetes-upgrade-104454 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1109 10:44:55.726164   32853 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-104454 kubernetes-upgrade-104454
	I1109 10:44:55.811046   32853 network_create.go:99] docker network kubernetes-upgrade-104454 192.168.67.0/24 created
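	The two "subnet is taken" retries above (192.168.49.0/24 and 192.168.58.0/24) can be cross-checked against the subnets already claimed by existing Docker networks, for example (illustrative one-liner):
	
	    docker network ls -q | xargs docker network inspect -f '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} {{end}}'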
	I1109 10:44:55.811083   32853 kic.go:106] calculated static IP "192.168.67.2" for the "kubernetes-upgrade-104454" container
	I1109 10:44:55.811217   32853 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1109 10:44:55.866144   32853 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-104454 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-104454 --label created_by.minikube.sigs.k8s.io=true
	I1109 10:44:55.919941   32853 oci.go:103] Successfully created a docker volume kubernetes-upgrade-104454
	I1109 10:44:55.920070   32853 cli_runner.go:164] Run: docker run --rm --name kubernetes-upgrade-104454-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-104454 --entrypoint /usr/bin/test -v kubernetes-upgrade-104454:/var gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -d /var/lib
	I1109 10:44:56.344709   32853 oci.go:107] Successfully prepared a docker volume kubernetes-upgrade-104454
	I1109 10:44:56.344744   32853 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1109 10:44:56.344758   32853 kic.go:179] Starting extracting preloaded images to volume ...
	I1109 10:44:56.344897   32853 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15331-22028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-104454:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -I lz4 -xf /preloaded.tar -C /extractDir
	I1109 10:45:00.496990   32853 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15331-22028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-104454:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -I lz4 -xf /preloaded.tar -C /extractDir: (4.152109645s)
	I1109 10:45:00.497020   32853 kic.go:188] duration metric: took 4.152364 seconds to extract preloaded images to volume
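The two docker run invocations above implement the volume preload: a --rm sidecar first probes the named volume, then tar unpacks the preloaded image tarball into it. A hedged Go sketch of the extraction step; the host tarball path and volume name are placeholders, the image and tar flags come from the log:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Digest pin omitted for brevity; the log pins the kicbase image by sha256.
	const image = "gcr.io/k8s-minikube/kicbase:v0.0.36"
	start := time.Now()
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", "/path/to/preloaded-images.tar.lz4:/preloaded.tar:ro", // placeholder host path
		"-v", "sketch-volume:/extractDir", // placeholder volume name
		image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("extract failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("took %s to extract preloaded images to volume\n", time.Since(start))
}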
	I1109 10:45:00.497151   32853 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1109 10:45:00.635777   32853 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubernetes-upgrade-104454 --name kubernetes-upgrade-104454 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-104454 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubernetes-upgrade-104454 --network kubernetes-upgrade-104454 --ip 192.168.67.2 --volume kubernetes-upgrade-104454:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=8443 --publish=22 --publish=2376 --publish=5000 --publish=32443 gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456
	I1109 10:45:00.980128   32853 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-104454 --format={{.State.Running}}
	I1109 10:45:01.038154   32853 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-104454 --format={{.State.Status}}
	I1109 10:45:01.096832   32853 cli_runner.go:164] Run: docker exec kubernetes-upgrade-104454 stat /var/lib/dpkg/alternatives/iptables
	I1109 10:45:01.201902   32853 oci.go:144] the created container "kubernetes-upgrade-104454" has a running status.
	I1109 10:45:01.201942   32853 kic.go:210] Creating ssh key for kic: /Users/jenkins/minikube-integration/15331-22028/.minikube/machines/kubernetes-upgrade-104454/id_rsa...
	I1109 10:45:01.389042   32853 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/15331-22028/.minikube/machines/kubernetes-upgrade-104454/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1109 10:45:01.490937   32853 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-104454 --format={{.State.Status}}
	I1109 10:45:01.547857   32853 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1109 10:45:01.547876   32853 kic_runner.go:114] Args: [docker exec --privileged kubernetes-upgrade-104454 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1109 10:45:01.653916   32853 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-104454 --format={{.State.Status}}
	I1109 10:45:01.709586   32853 machine.go:88] provisioning docker machine ...
	I1109 10:45:01.709639   32853 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-104454"
	I1109 10:45:01.709765   32853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-104454
	I1109 10:45:01.765843   32853 main.go:134] libmachine: Using SSH client type: native
	I1109 10:45:01.766051   32853 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6d00] 0x13e9e80 <nil>  [] 0s} 127.0.0.1 63592 <nil> <nil>}
	I1109 10:45:01.766065   32853 main.go:134] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-104454 && echo "kubernetes-upgrade-104454" | sudo tee /etc/hostname
	I1109 10:45:01.894108   32853 main.go:134] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-104454
	
	I1109 10:45:01.894215   32853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-104454
	I1109 10:45:01.950525   32853 main.go:134] libmachine: Using SSH client type: native
	I1109 10:45:01.951181   32853 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6d00] 0x13e9e80 <nil>  [] 0s} 127.0.0.1 63592 <nil> <nil>}
	I1109 10:45:01.951205   32853 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-104454' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-104454/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-104454' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 10:45:02.069697   32853 main.go:134] libmachine: SSH cmd err, output: <nil>: 
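The shell fragment above keeps /etc/hosts consistent with the new hostname: if no line already ends in the machine name, it either rewrites the existing 127.0.1.1 entry in place or appends one. A sketch of rendering such a script per host before sending it over the SSH session; the template text is the script from the log, the rendering helper is illustrative:

package main

import "fmt"

// hostsFixTemplate is the script from the log; %[1]s is the machine hostname.
const hostsFixTemplate = `if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi
`

func main() {
	// Render for the host in this run before piping it over SSH.
	fmt.Printf(hostsFixTemplate, "kubernetes-upgrade-104454")
}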
	I1109 10:45:02.069720   32853 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15331-22028/.minikube CaCertPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15331-22028/.minikube}
	I1109 10:45:02.069748   32853 ubuntu.go:177] setting up certificates
	I1109 10:45:02.069755   32853 provision.go:83] configureAuth start
	I1109 10:45:02.069855   32853 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-104454
	I1109 10:45:02.127355   32853 provision.go:138] copyHostCerts
	I1109 10:45:02.127460   32853 exec_runner.go:144] found /Users/jenkins/minikube-integration/15331-22028/.minikube/key.pem, removing ...
	I1109 10:45:02.127469   32853 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15331-22028/.minikube/key.pem
	I1109 10:45:02.127571   32853 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15331-22028/.minikube/key.pem (1675 bytes)
	I1109 10:45:02.127791   32853 exec_runner.go:144] found /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.pem, removing ...
	I1109 10:45:02.127798   32853 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.pem
	I1109 10:45:02.127864   32853 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.pem (1082 bytes)
	I1109 10:45:02.128015   32853 exec_runner.go:144] found /Users/jenkins/minikube-integration/15331-22028/.minikube/cert.pem, removing ...
	I1109 10:45:02.128021   32853 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15331-22028/.minikube/cert.pem
	I1109 10:45:02.128084   32853 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15331-22028/.minikube/cert.pem (1123 bytes)
	I1109 10:45:02.128208   32853 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-104454 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-104454]
	I1109 10:45:02.187754   32853 provision.go:172] copyRemoteCerts
	I1109 10:45:02.187820   32853 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 10:45:02.187884   32853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-104454
	I1109 10:45:02.244938   32853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63592 SSHKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/kubernetes-upgrade-104454/id_rsa Username:docker}
	I1109 10:45:02.330416   32853 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1109 10:45:02.348062   32853 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1109 10:45:02.364945   32853 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1109 10:45:02.382129   32853 provision.go:86] duration metric: configureAuth took 312.367217ms
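configureAuth above mints a Docker TLS server certificate whose SANs cover the container IP plus the usual local names. A stdlib-only Go sketch of generating a certificate with those SANs; it is self-signed for brevity, whereas minikube signs with its machine CA (the ca.pem / ca-key.pem pair in the log):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	// SANs mirror the san=[...] list logged by provision.go.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-104454"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		IPAddresses:  []net.IP{net.ParseIP("192.168.67.2"), net.ParseIP("127.0.0.1")},
		DNSNames:     []string{"localhost", "minikube", "kubernetes-upgrade-104454"},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}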
	I1109 10:45:02.382141   32853 ubuntu.go:193] setting minikube options for container-runtime
	I1109 10:45:02.382312   32853 config.go:180] Loaded profile config "kubernetes-upgrade-104454": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I1109 10:45:02.382404   32853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-104454
	I1109 10:45:02.439250   32853 main.go:134] libmachine: Using SSH client type: native
	I1109 10:45:02.439408   32853 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6d00] 0x13e9e80 <nil>  [] 0s} 127.0.0.1 63592 <nil> <nil>}
	I1109 10:45:02.439423   32853 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1109 10:45:02.557244   32853 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1109 10:45:02.557257   32853 ubuntu.go:71] root file system type: overlay
	I1109 10:45:02.557419   32853 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1109 10:45:02.557529   32853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-104454
	I1109 10:45:02.614923   32853 main.go:134] libmachine: Using SSH client type: native
	I1109 10:45:02.615087   32853 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6d00] 0x13e9e80 <nil>  [] 0s} 127.0.0.1 63592 <nil> <nil>}
	I1109 10:45:02.615132   32853 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1109 10:45:02.740754   32853 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1109 10:45:02.740873   32853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-104454
	I1109 10:45:02.797762   32853 main.go:134] libmachine: Using SSH client type: native
	I1109 10:45:02.797921   32853 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6d00] 0x13e9e80 <nil>  [] 0s} 127.0.0.1 63592 <nil> <nil>}
	I1109 10:45:02.797934   32853 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1109 10:45:03.386012   32853 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-10-18 18:18:12.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-11-09 18:45:02.750360041 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1109 10:45:03.386037   32853 machine.go:91] provisioned docker machine in 1.676463804s
	I1109 10:45:03.386043   32853 client.go:171] LocalClient.Create took 7.936115376s
	I1109 10:45:03.386062   32853 start.go:167] duration metric: libmachine.API.Create for "kubernetes-upgrade-104454" took 7.936187717s
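The unit rewrite shown in the diff above relies on systemd's list-clearing semantics: an empty ExecStart= assignment resets any value inherited from the base unit, so the pair of directives in the generated file,

ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock ...

replaces the inherited command instead of appending to it. Without the empty first directive, systemd would reject the unit with "Service has more than one ExecStart= setting", exactly as the comment embedded in the generated file warns.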
	I1109 10:45:03.386072   32853 start.go:300] post-start starting for "kubernetes-upgrade-104454" (driver="docker")
	I1109 10:45:03.386077   32853 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 10:45:03.386160   32853 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 10:45:03.386235   32853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-104454
	I1109 10:45:03.445495   32853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63592 SSHKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/kubernetes-upgrade-104454/id_rsa Username:docker}
	I1109 10:45:03.531579   32853 ssh_runner.go:195] Run: cat /etc/os-release
	I1109 10:45:03.535185   32853 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1109 10:45:03.535201   32853 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1109 10:45:03.535208   32853 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1109 10:45:03.535213   32853 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I1109 10:45:03.535223   32853 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15331-22028/.minikube/addons for local assets ...
	I1109 10:45:03.535322   32853 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15331-22028/.minikube/files for local assets ...
	I1109 10:45:03.535516   32853 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15331-22028/.minikube/files/etc/ssl/certs/228682.pem -> 228682.pem in /etc/ssl/certs
	I1109 10:45:03.535741   32853 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1109 10:45:03.543615   32853 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/files/etc/ssl/certs/228682.pem --> /etc/ssl/certs/228682.pem (1708 bytes)
	I1109 10:45:03.561087   32853 start.go:303] post-start completed in 175.005498ms
	I1109 10:45:03.561649   32853 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-104454
	I1109 10:45:03.619663   32853 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/kubernetes-upgrade-104454/config.json ...
	I1109 10:45:03.620104   32853 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 10:45:03.620168   32853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-104454
	I1109 10:45:03.677421   32853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63592 SSHKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/kubernetes-upgrade-104454/id_rsa Username:docker}
	I1109 10:45:03.759611   32853 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1109 10:45:03.764418   32853 start.go:128] duration metric: createHost completed in 8.356967816s
	I1109 10:45:03.764437   32853 start.go:83] releasing machines lock for "kubernetes-upgrade-104454", held for 8.357081458s
	I1109 10:45:03.764533   32853 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-104454
	I1109 10:45:03.836873   32853 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I1109 10:45:03.836884   32853 ssh_runner.go:195] Run: systemctl --version
	I1109 10:45:03.836950   32853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-104454
	I1109 10:45:03.836964   32853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-104454
	I1109 10:45:03.896647   32853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63592 SSHKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/kubernetes-upgrade-104454/id_rsa Username:docker}
	I1109 10:45:03.896822   32853 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63592 SSHKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/kubernetes-upgrade-104454/id_rsa Username:docker}
	I1109 10:45:03.982379   32853 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1109 10:45:04.229051   32853 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I1109 10:45:04.229144   32853 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1109 10:45:04.239041   32853 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 10:45:04.251704   32853 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1109 10:45:04.329129   32853 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1109 10:45:04.407518   32853 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 10:45:04.479247   32853 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1109 10:45:04.702922   32853 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1109 10:45:04.730424   32853 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1109 10:45:04.781344   32853 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 20.10.20 ...
	I1109 10:45:04.781567   32853 cli_runner.go:164] Run: docker exec -t kubernetes-upgrade-104454 dig +short host.docker.internal
	I1109 10:45:04.902274   32853 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I1109 10:45:04.902390   32853 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I1109 10:45:04.906408   32853 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 10:45:04.916163   32853 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-104454
	I1109 10:45:04.972948   32853 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1109 10:45:04.973041   32853 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1109 10:45:04.996098   32853 docker.go:613] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I1109 10:45:04.996123   32853 docker.go:543] Images already preloaded, skipping extraction
	I1109 10:45:04.996239   32853 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1109 10:45:05.018662   32853 docker.go:613] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I1109 10:45:05.018681   32853 cache_images.go:84] Images are preloaded, skipping loading
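The duplicated `docker images` listing above is minikube confirming the preloaded set before skipping extraction and image loading. A small Go sketch of that comparison; the expected names are taken from the stdout block, and the helper itself is illustrative:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		fmt.Println("docker not available:", err)
		return
	}
	have := strings.Fields(string(out))
	for _, want := range []string{"k8s.gcr.io/kube-apiserver:v1.16.0", "k8s.gcr.io/etcd:3.3.15-0"} {
		found := false
		for _, img := range have {
			if img == want {
				found = true
				break
			}
		}
		fmt.Printf("%s preloaded: %v\n", want, found)
	}
}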
	I1109 10:45:05.018800   32853 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1109 10:45:05.089168   32853 cni.go:95] Creating CNI manager for ""
	I1109 10:45:05.089183   32853 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1109 10:45:05.089195   32853 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1109 10:45:05.089213   32853 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-104454 NodeName:kubernetes-upgrade-104454 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
	I1109 10:45:05.089331   32853 kubeadm.go:161] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "kubernetes-upgrade-104454"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: kubernetes-upgrade-104454
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.67.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
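The generated kubeadm config above is four YAML documents in one file: InitConfiguration and ClusterConfiguration for kubeadm itself, plus KubeletConfiguration and KubeProxyConfiguration passed through to the components. A stdlib-only sketch of splitting such a file into its documents; real tooling would use a YAML parser:

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Abbreviated stand-in for the multi-document config shown above.
	config := `apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration`
	for i, doc := range strings.Split(config, "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(line, "kind: ") {
				fmt.Printf("doc %d: %s\n", i, strings.TrimPrefix(line, "kind: "))
			}
		}
	}
}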
	
	I1109 10:45:05.089409   32853 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=kubernetes-upgrade-104454 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-104454 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1109 10:45:05.089487   32853 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I1109 10:45:05.096987   32853 binaries.go:44] Found k8s binaries, skipping transfer
	I1109 10:45:05.097054   32853 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1109 10:45:05.104019   32853 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (351 bytes)
	I1109 10:45:05.116233   32853 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1109 10:45:05.129209   32853 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
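The "scp memory --> path (N bytes)" lines above mean the file body is rendered in memory and streamed over the SSH session rather than copied from a host file. A stand-in sketch with io.Writer in place of the SSH channel; the helper name is illustrative:

package main

import (
	"bytes"
	"fmt"
	"io"
)

// copyMemory streams an in-memory file body to a destination writer; in
// minikube the destination would be the SSH session to the node.
func copyMemory(dst io.Writer, contents []byte) (int64, error) {
	return io.Copy(dst, bytes.NewReader(contents))
}

func main() {
	var sink bytes.Buffer
	n, err := copyMemory(&sink, []byte("[Unit]\nWants=docker.socket\n"))
	if err != nil {
		panic(err)
	}
	fmt.Printf("scp memory --> /lib/systemd/system/kubelet.service (%d bytes)\n", n)
}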
	I1109 10:45:05.142443   32853 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I1109 10:45:05.146317   32853 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 10:45:05.155764   32853 certs.go:54] Setting up /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/kubernetes-upgrade-104454 for IP: 192.168.67.2
	I1109 10:45:05.155893   32853 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.key
	I1109 10:45:05.155968   32853 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15331-22028/.minikube/proxy-client-ca.key
	I1109 10:45:05.156020   32853 certs.go:302] generating minikube-user signed cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/kubernetes-upgrade-104454/client.key
	I1109 10:45:05.156042   32853 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/kubernetes-upgrade-104454/client.crt with IP's: []
	I1109 10:45:05.231779   32853 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/kubernetes-upgrade-104454/client.crt ...
	I1109 10:45:05.231789   32853 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/kubernetes-upgrade-104454/client.crt: {Name:mk473f3b4d7c1b742156aceaec7028edbce61c97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 10:45:05.232101   32853 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/kubernetes-upgrade-104454/client.key ...
	I1109 10:45:05.232109   32853 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/kubernetes-upgrade-104454/client.key: {Name:mk19c5d4cb92e92aad41eec584b57a53635b1ba9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 10:45:05.232301   32853 certs.go:302] generating minikube signed cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/kubernetes-upgrade-104454/apiserver.key.c7fa3a9e
	I1109 10:45:05.232323   32853 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/kubernetes-upgrade-104454/apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1109 10:45:05.343238   32853 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/kubernetes-upgrade-104454/apiserver.crt.c7fa3a9e ...
	I1109 10:45:05.343256   32853 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/kubernetes-upgrade-104454/apiserver.crt.c7fa3a9e: {Name:mkd870d001a8cebeb21707092f614b727181c716 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 10:45:05.343567   32853 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/kubernetes-upgrade-104454/apiserver.key.c7fa3a9e ...
	I1109 10:45:05.343578   32853 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/kubernetes-upgrade-104454/apiserver.key.c7fa3a9e: {Name:mk59f6588ef009d0c6d596bac1b387925f5081e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 10:45:05.343760   32853 certs.go:320] copying /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/kubernetes-upgrade-104454/apiserver.crt.c7fa3a9e -> /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/kubernetes-upgrade-104454/apiserver.crt
	I1109 10:45:05.343937   32853 certs.go:324] copying /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/kubernetes-upgrade-104454/apiserver.key.c7fa3a9e -> /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/kubernetes-upgrade-104454/apiserver.key
	I1109 10:45:05.344121   32853 certs.go:302] generating aggregator signed cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/kubernetes-upgrade-104454/proxy-client.key
	I1109 10:45:05.344142   32853 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/kubernetes-upgrade-104454/proxy-client.crt with IP's: []
	I1109 10:45:05.638801   32853 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/kubernetes-upgrade-104454/proxy-client.crt ...
	I1109 10:45:05.638817   32853 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/kubernetes-upgrade-104454/proxy-client.crt: {Name:mk6ebd9b234a26b8aea13f95eab0ea484383b4fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 10:45:05.639114   32853 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/kubernetes-upgrade-104454/proxy-client.key ...
	I1109 10:45:05.639123   32853 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/kubernetes-upgrade-104454/proxy-client.key: {Name:mk156ca6a23f18a72063423a776dcc971f7248da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 10:45:05.639548   32853 certs.go:388] found cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/22868.pem (1338 bytes)
	W1109 10:45:05.639603   32853 certs.go:384] ignoring /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/22868_empty.pem, impossibly tiny 0 bytes
	I1109 10:45:05.639618   32853 certs.go:388] found cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca-key.pem (1675 bytes)
	I1109 10:45:05.639653   32853 certs.go:388] found cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca.pem (1082 bytes)
	I1109 10:45:05.639686   32853 certs.go:388] found cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/cert.pem (1123 bytes)
	I1109 10:45:05.639721   32853 certs.go:388] found cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/key.pem (1675 bytes)
	I1109 10:45:05.639794   32853 certs.go:388] found cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15331-22028/.minikube/files/etc/ssl/certs/228682.pem (1708 bytes)
	I1109 10:45:05.640285   32853 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/kubernetes-upgrade-104454/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1109 10:45:05.659007   32853 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/kubernetes-upgrade-104454/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1109 10:45:05.676344   32853 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/kubernetes-upgrade-104454/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1109 10:45:05.693608   32853 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/kubernetes-upgrade-104454/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1109 10:45:05.710977   32853 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 10:45:05.728463   32853 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1109 10:45:05.746354   32853 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 10:45:05.763264   32853 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1109 10:45:05.780034   32853 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/files/etc/ssl/certs/228682.pem --> /usr/share/ca-certificates/228682.pem (1708 bytes)
	I1109 10:45:05.797869   32853 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 10:45:05.814905   32853 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/22868.pem --> /usr/share/ca-certificates/22868.pem (1338 bytes)
	I1109 10:45:05.832150   32853 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1109 10:45:05.845394   32853 ssh_runner.go:195] Run: openssl version
	I1109 10:45:05.850916   32853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/228682.pem && ln -fs /usr/share/ca-certificates/228682.pem /etc/ssl/certs/228682.pem"
	I1109 10:45:05.858899   32853 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/228682.pem
	I1109 10:45:05.862775   32853 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Nov  9 18:08 /usr/share/ca-certificates/228682.pem
	I1109 10:45:05.862830   32853 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/228682.pem
	I1109 10:45:05.867956   32853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/228682.pem /etc/ssl/certs/3ec20f2e.0"
	I1109 10:45:05.875652   32853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 10:45:05.883404   32853 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 10:45:05.887119   32853 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Nov  9 18:04 /usr/share/ca-certificates/minikubeCA.pem
	I1109 10:45:05.887169   32853 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 10:45:05.892546   32853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1109 10:45:05.900162   32853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22868.pem && ln -fs /usr/share/ca-certificates/22868.pem /etc/ssl/certs/22868.pem"
	I1109 10:45:05.907841   32853 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22868.pem
	I1109 10:45:05.911853   32853 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Nov  9 18:08 /usr/share/ca-certificates/22868.pem
	I1109 10:45:05.911910   32853 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22868.pem
	I1109 10:45:05.917680   32853 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22868.pem /etc/ssl/certs/51391683.0"
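The openssl/ln sequence above builds OpenSSL-style trust-store entries: each CA is symlinked as <subject-hash>.0 so library lookups by hash succeed. A Go sketch that derives the link name the same way; the cert path is from the log and openssl must be on PATH:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// subjectHash shells out the same way the provisioning step does:
// `openssl x509 -hash -noout -in <cert>` prints the subject-name hash that
// names the /etc/ssl/certs/<hash>.0 symlink.
func subjectHash(certPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	h, err := subjectHash("/usr/share/ca-certificates/minikubeCA.pem")
	if err != nil {
		fmt.Println("openssl unavailable or cert missing:", err)
		return
	}
	fmt.Printf("ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/%s.0\n", h)
}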
	I1109 10:45:05.925532   32853 kubeadm.go:396] StartCluster: {Name:kubernetes-upgrade-104454 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:kubernetes-upgrade-104454 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1109 10:45:05.925658   32853 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1109 10:45:05.948608   32853 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1109 10:45:05.956493   32853 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1109 10:45:05.963681   32853 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I1109 10:45:05.963744   32853 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1109 10:45:05.971148   32853 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1109 10:45:05.971176   32853 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1109 10:45:06.018556   32853 kubeadm.go:317] [init] Using Kubernetes version: v1.16.0
	I1109 10:45:06.018614   32853 kubeadm.go:317] [preflight] Running pre-flight checks
	I1109 10:45:06.311031   32853 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1109 10:45:06.311121   32853 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1109 10:45:06.311193   32853 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1109 10:45:06.534984   32853 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1109 10:45:06.535723   32853 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1109 10:45:06.542728   32853 kubeadm.go:317] [kubelet-start] Activating the kubelet service
	I1109 10:45:06.616077   32853 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1109 10:45:06.637946   32853 out.go:204]   - Generating certificates and keys ...
	I1109 10:45:06.638034   32853 kubeadm.go:317] [certs] Using existing ca certificate authority
	I1109 10:45:06.638129   32853 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I1109 10:45:06.700610   32853 kubeadm.go:317] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1109 10:45:06.824693   32853 kubeadm.go:317] [certs] Generating "front-proxy-ca" certificate and key
	I1109 10:45:07.079970   32853 kubeadm.go:317] [certs] Generating "front-proxy-client" certificate and key
	I1109 10:45:07.343934   32853 kubeadm.go:317] [certs] Generating "etcd/ca" certificate and key
	I1109 10:45:07.411559   32853 kubeadm.go:317] [certs] Generating "etcd/server" certificate and key
	I1109 10:45:07.411671   32853 kubeadm.go:317] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-104454 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I1109 10:45:07.712664   32853 kubeadm.go:317] [certs] Generating "etcd/peer" certificate and key
	I1109 10:45:07.712786   32853 kubeadm.go:317] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-104454 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I1109 10:45:07.939898   32853 kubeadm.go:317] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1109 10:45:08.268909   32853 kubeadm.go:317] [certs] Generating "apiserver-etcd-client" certificate and key
	I1109 10:45:08.443436   32853 kubeadm.go:317] [certs] Generating "sa" key and public key
	I1109 10:45:08.443616   32853 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1109 10:45:08.636029   32853 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1109 10:45:08.828247   32853 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1109 10:45:09.250357   32853 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1109 10:45:09.381155   32853 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1109 10:45:09.381663   32853 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1109 10:45:09.423983   32853 out.go:204]   - Booting up control plane ...
	I1109 10:45:09.424210   32853 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1109 10:45:09.424351   32853 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1109 10:45:09.424497   32853 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1109 10:45:09.424653   32853 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1109 10:45:09.424895   32853 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1109 10:45:49.360525   32853 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
	I1109 10:45:49.361100   32853 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1109 10:45:49.361264   32853 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1109 10:45:54.358171   32853 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1109 10:45:54.358353   32853 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1109 10:46:04.351446   32853 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1109 10:46:04.351648   32853 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1109 10:46:24.337482   32853 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1109 10:46:24.337617   32853 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1109 10:47:04.425767   32853 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1109 10:47:04.425923   32853 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1109 10:47:04.425946   32853 kubeadm.go:317] 
	I1109 10:47:04.426015   32853 kubeadm.go:317] Unfortunately, an error has occurred:
	I1109 10:47:04.426075   32853 kubeadm.go:317] 	timed out waiting for the condition
	I1109 10:47:04.426094   32853 kubeadm.go:317] 
	I1109 10:47:04.426131   32853 kubeadm.go:317] This error is likely caused by:
	I1109 10:47:04.426166   32853 kubeadm.go:317] 	- The kubelet is not running
	I1109 10:47:04.426250   32853 kubeadm.go:317] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1109 10:47:04.426257   32853 kubeadm.go:317] 
	I1109 10:47:04.426342   32853 kubeadm.go:317] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1109 10:47:04.426370   32853 kubeadm.go:317] 	- 'systemctl status kubelet'
	I1109 10:47:04.426389   32853 kubeadm.go:317] 	- 'journalctl -xeu kubelet'
	I1109 10:47:04.426393   32853 kubeadm.go:317] 
	I1109 10:47:04.426467   32853 kubeadm.go:317] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1109 10:47:04.426533   32853 kubeadm.go:317] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I1109 10:47:04.426633   32853 kubeadm.go:317] Here is one example how you may list all Kubernetes containers running in docker:
	I1109 10:47:04.426670   32853 kubeadm.go:317] 	- 'docker ps -a | grep kube | grep -v pause'
	I1109 10:47:04.426749   32853 kubeadm.go:317] 	Once you have found the failing container, you can inspect its logs with:
	I1109 10:47:04.426785   32853 kubeadm.go:317] 	- 'docker logs CONTAINERID'
	I1109 10:47:04.429703   32853 kubeadm.go:317] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I1109 10:47:04.429820   32853 kubeadm.go:317] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 18.09
	I1109 10:47:04.429949   32853 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1109 10:47:04.430030   32853 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1109 10:47:04.430139   32853 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
	W1109 10:47:04.430327   32853 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-104454 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-104454 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1109 10:47:04.430361   32853 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I1109 10:47:04.854853   32853 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 10:47:04.867186   32853 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I1109 10:47:04.867259   32853 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1109 10:47:04.877604   32853 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1109 10:47:04.877628   32853 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1109 10:47:04.935866   32853 kubeadm.go:317] [init] Using Kubernetes version: v1.16.0
	I1109 10:47:04.935920   32853 kubeadm.go:317] [preflight] Running pre-flight checks
	I1109 10:47:05.276789   32853 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1109 10:47:05.276894   32853 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1109 10:47:05.276985   32853 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1109 10:47:05.557828   32853 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1109 10:47:05.561017   32853 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1109 10:47:05.568602   32853 kubeadm.go:317] [kubelet-start] Activating the kubelet service
	I1109 10:47:05.640647   32853 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1109 10:47:05.662215   32853 out.go:204]   - Generating certificates and keys ...
	I1109 10:47:05.662315   32853 kubeadm.go:317] [certs] Using existing ca certificate authority
	I1109 10:47:05.662417   32853 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I1109 10:47:05.662573   32853 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1109 10:47:05.662678   32853 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I1109 10:47:05.662752   32853 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I1109 10:47:05.662811   32853 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I1109 10:47:05.662880   32853 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I1109 10:47:05.662960   32853 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I1109 10:47:05.663036   32853 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1109 10:47:05.663130   32853 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1109 10:47:05.663173   32853 kubeadm.go:317] [certs] Using the existing "sa" key
	I1109 10:47:05.663233   32853 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1109 10:47:05.726661   32853 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1109 10:47:05.870694   32853 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1109 10:47:06.187499   32853 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1109 10:47:06.582966   32853 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1109 10:47:06.584408   32853 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1109 10:47:06.605863   32853 out.go:204]   - Booting up control plane ...
	I1109 10:47:06.605982   32853 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1109 10:47:06.606076   32853 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1109 10:47:06.606146   32853 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1109 10:47:06.606219   32853 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1109 10:47:06.606344   32853 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1109 10:47:46.567802   32853 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
	I1109 10:47:46.568424   32853 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1109 10:47:46.568561   32853 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1109 10:47:51.565701   32853 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1109 10:47:51.565902   32853 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1109 10:48:01.562137   32853 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1109 10:48:01.562319   32853 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1109 10:48:21.547985   32853 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1109 10:48:21.548171   32853 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1109 10:49:01.520860   32853 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1109 10:49:01.521058   32853 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1109 10:49:01.521073   32853 kubeadm.go:317] 
	I1109 10:49:01.521116   32853 kubeadm.go:317] Unfortunately, an error has occurred:
	I1109 10:49:01.521156   32853 kubeadm.go:317] 	timed out waiting for the condition
	I1109 10:49:01.521166   32853 kubeadm.go:317] 
	I1109 10:49:01.521202   32853 kubeadm.go:317] This error is likely caused by:
	I1109 10:49:01.521234   32853 kubeadm.go:317] 	- The kubelet is not running
	I1109 10:49:01.521380   32853 kubeadm.go:317] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1109 10:49:01.521392   32853 kubeadm.go:317] 
	I1109 10:49:01.521510   32853 kubeadm.go:317] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1109 10:49:01.521550   32853 kubeadm.go:317] 	- 'systemctl status kubelet'
	I1109 10:49:01.521594   32853 kubeadm.go:317] 	- 'journalctl -xeu kubelet'
	I1109 10:49:01.521602   32853 kubeadm.go:317] 
	I1109 10:49:01.521701   32853 kubeadm.go:317] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1109 10:49:01.521796   32853 kubeadm.go:317] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I1109 10:49:01.521906   32853 kubeadm.go:317] Here is one example how you may list all Kubernetes containers running in docker:
	I1109 10:49:01.521960   32853 kubeadm.go:317] 	- 'docker ps -a | grep kube | grep -v pause'
	I1109 10:49:01.522071   32853 kubeadm.go:317] 	Once you have found the failing container, you can inspect its logs with:
	I1109 10:49:01.522115   32853 kubeadm.go:317] 	- 'docker logs CONTAINERID'
	I1109 10:49:01.524484   32853 kubeadm.go:317] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I1109 10:49:01.524595   32853 kubeadm.go:317] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 18.09
	I1109 10:49:01.524691   32853 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1109 10:49:01.524759   32853 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1109 10:49:01.524824   32853 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
	I1109 10:49:01.524852   32853 kubeadm.go:398] StartCluster complete in 3m55.486136071s
	I1109 10:49:01.524954   32853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1109 10:49:01.547308   32853 logs.go:274] 0 containers: []
	W1109 10:49:01.547319   32853 logs.go:276] No container was found matching "kube-apiserver"
	I1109 10:49:01.547401   32853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1109 10:49:01.568845   32853 logs.go:274] 0 containers: []
	W1109 10:49:01.568857   32853 logs.go:276] No container was found matching "etcd"
	I1109 10:49:01.568944   32853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1109 10:49:01.591260   32853 logs.go:274] 0 containers: []
	W1109 10:49:01.591273   32853 logs.go:276] No container was found matching "coredns"
	I1109 10:49:01.591354   32853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1109 10:49:01.612855   32853 logs.go:274] 0 containers: []
	W1109 10:49:01.612868   32853 logs.go:276] No container was found matching "kube-scheduler"
	I1109 10:49:01.612954   32853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1109 10:49:01.634862   32853 logs.go:274] 0 containers: []
	W1109 10:49:01.634877   32853 logs.go:276] No container was found matching "kube-proxy"
	I1109 10:49:01.634973   32853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1109 10:49:01.658930   32853 logs.go:274] 0 containers: []
	W1109 10:49:01.658942   32853 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1109 10:49:01.659027   32853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1109 10:49:01.681138   32853 logs.go:274] 0 containers: []
	W1109 10:49:01.681150   32853 logs.go:276] No container was found matching "storage-provisioner"
	I1109 10:49:01.681232   32853 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1109 10:49:01.702850   32853 logs.go:274] 0 containers: []
	W1109 10:49:01.702865   32853 logs.go:276] No container was found matching "kube-controller-manager"
	I1109 10:49:01.702875   32853 logs.go:123] Gathering logs for dmesg ...
	I1109 10:49:01.702884   32853 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1109 10:49:01.716778   32853 logs.go:123] Gathering logs for describe nodes ...
	I1109 10:49:01.716791   32853 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1109 10:49:01.769470   32853 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1109 10:49:01.769482   32853 logs.go:123] Gathering logs for Docker ...
	I1109 10:49:01.769489   32853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1109 10:49:01.784921   32853 logs.go:123] Gathering logs for container status ...
	I1109 10:49:01.784939   32853 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1109 10:49:03.833213   32853 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.048277766s)
	I1109 10:49:03.833339   32853 logs.go:123] Gathering logs for kubelet ...
	I1109 10:49:03.833349   32853 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1109 10:49:03.872726   32853 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1109 10:49:03.872746   32853 out.go:239] * 
	W1109 10:49:03.872853   32853 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1109 10:49:03.872868   32853 out.go:239] * 
	W1109 10:49:03.873483   32853 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1109 10:49:03.952984   32853 out.go:177] 
	W1109 10:49:04.027296   32853 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1109 10:49:04.027451   32853 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1109 10:49:04.027528   32853 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1109 10:49:04.049241   32853 out.go:177] 

                                                
                                                
** /stderr **
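The stderr dump above is kubeadm repeatedly polling the kubelet's health endpoint ('curl -sSL http://localhost:10248/healthz') until the 4m0s wait-control-plane timeout expires. A minimal sketch of the same triage done by hand, assuming the docker driver's node container carries the profile name (kubernetes-upgrade-104454 here):

	# the probe kubeadm repeats until it times out
	docker exec kubernetes-upgrade-104454 curl -sSL http://localhost:10248/healthz
	# kubelet state and recent journal, per the log's own advice
	docker exec kubernetes-upgrade-104454 systemctl status kubelet
	docker exec kubernetes-upgrade-104454 journalctl -xeu kubelet --no-pager | tail -n 50
	# control-plane containers that may have crashed on start
	docker exec kubernetes-upgrade-104454 sh -c 'docker ps -a | grep kube | grep -v pause'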
version_upgrade_test.go:231: failed to start minikube HEAD with oldest k8s version: out/minikube-darwin-amd64 start -p kubernetes-upgrade-104454 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=docker : exit status 109
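Exit status 109 is minikube's K8S_KUBELET_NOT_RUNNING code, and the suggestion printed with it is to align the kubelet's cgroup driver with systemd. A sketch of retrying the same start with that override, using the flag exactly as the log suggests and the test's own binary and profile:

	out/minikube-darwin-amd64 start -p kubernetes-upgrade-104454 --memory=2200 \
	  --kubernetes-version=v1.16.0 --driver=docker \
	  --extra-config=kubelet.cgroup-driver=systemd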
version_upgrade_test.go:234: (dbg) Run:  out/minikube-darwin-amd64 stop -p kubernetes-upgrade-104454
version_upgrade_test.go:234: (dbg) Done: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-104454: (1.601194146s)
version_upgrade_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-104454 status --format={{.Host}}
version_upgrade_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p kubernetes-upgrade-104454 status --format={{.Host}}: exit status 7 (129.401509ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:241: status error: exit status 7 (may be ok)
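The --format flag renders minikube's status through a Go template, which is how the test extracts the single 'Stopped' value with {{.Host}} above. A sketch querying several fields at once (the field names Kubelet and APIServer are assumptions based on minikube's documented status output, not taken from this log):

	out/minikube-darwin-amd64 -p kubernetes-upgrade-104454 status \
	  --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'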
version_upgrade_test.go:250: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-104454 --memory=2200 --kubernetes-version=v1.25.3 --alsologtostderr -v=1 --driver=docker 

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:250: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-104454 --memory=2200 --kubernetes-version=v1.25.3 --alsologtostderr -v=1 --driver=docker : (4m40.324496302s)
version_upgrade_test.go:255: (dbg) Run:  kubectl --context kubernetes-upgrade-104454 version --output=json
version_upgrade_test.go:274: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:276: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-104454 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker 
version_upgrade_test.go:276: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-104454 --memory=2200 --kubernetes-version=v1.16.0 --driver=docker : exit status 106 (520.242142ms)

-- stdout --
	* [kubernetes-upgrade-104454] minikube v1.28.0 on Darwin 13.0
	  - MINIKUBE_LOCATION=15331
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15331-22028/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15331-22028/.minikube
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.25.3 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-104454
	    minikube start -p kubernetes-upgrade-104454 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-1044542 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.25.3, by running:
	    
	    minikube start -p kubernetes-upgrade-104454 --kubernetes-version=v1.25.3
	    
	
** /stderr **
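Before choosing between the recovery options listed above, the running server version can be confirmed directly; the harness does the equivalent with `kubectl version --output=json` a few steps earlier. A minimal sketch (assuming `jq` is available, which this test environment does not guarantee):

    kubectl --context kubernetes-upgrade-104454 version --output=json | jq -r '.serverVersion.gitVersion'
    # -> v1.25.3, which is why the in-place downgrade to v1.16.0 is refused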
version_upgrade_test.go:280: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-104454 --memory=2200 --kubernetes-version=v1.25.3 --alsologtostderr -v=1 --driver=docker 

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:282: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-104454 --memory=2200 --kubernetes-version=v1.25.3 --alsologtostderr -v=1 --driver=docker : (37.754670371s)
version_upgrade_test.go:286: *** TestKubernetesUpgrade FAILED at 2022-11-09 10:54:24.54122 -0800 PST m=+3089.449141933
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect kubernetes-upgrade-104454
helpers_test.go:235: (dbg) docker inspect kubernetes-upgrade-104454:

-- stdout --
	[
	    {
	        "Id": "205ae934f56b0c83be537a42f3b4df220ce277ae9b7ed9706a15c027725d9a3e",
	        "Created": "2022-11-09T18:45:00.699750935Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 178660,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-11-09T18:49:07.237553211Z",
	            "FinishedAt": "2022-11-09T18:49:04.638867145Z"
	        },
	        "Image": "sha256:866c1fe4e3f2d2bfd7e546c12f77c7ef1d94d65a891923ff6772712a9f20df40",
	        "ResolvConfPath": "/var/lib/docker/containers/205ae934f56b0c83be537a42f3b4df220ce277ae9b7ed9706a15c027725d9a3e/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/205ae934f56b0c83be537a42f3b4df220ce277ae9b7ed9706a15c027725d9a3e/hostname",
	        "HostsPath": "/var/lib/docker/containers/205ae934f56b0c83be537a42f3b4df220ce277ae9b7ed9706a15c027725d9a3e/hosts",
	        "LogPath": "/var/lib/docker/containers/205ae934f56b0c83be537a42f3b4df220ce277ae9b7ed9706a15c027725d9a3e/205ae934f56b0c83be537a42f3b4df220ce277ae9b7ed9706a15c027725d9a3e-json.log",
	        "Name": "/kubernetes-upgrade-104454",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-104454:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-104454",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/74c495198459a4a5c5c6ec21ae93192452e69995840ce9e883970f9aca3559fe-init/diff:/var/lib/docker/overlay2/8c1487330bae95024fb04d0a8169f7cc81fd1ba3c27821870f7ac7c3f14eba21/diff:/var/lib/docker/overlay2/bcaf2c5b25be7a7acfb5b663242cc7456d579ea111b07e556bc197c7bfe8eceb/diff:/var/lib/docker/overlay2/0638d8210ce7d8ac0e4379a16e33ec4ba3dad0040bc7b1e6eee9a3ce3b1bab29/diff:/var/lib/docker/overlay2/82d04ede67e6bea7f3cfd2fd8cdf0af23333441d1a311f6c55109e45255a64ad/diff:/var/lib/docker/overlay2/00bbdacd39c41ffbc754eaba2d71640e0fb4097eb9097b8c2a5999bb5a8d4954/diff:/var/lib/docker/overlay2/dcea734b558e644021b8ede0f23c4e46a58e4c344becb334c465fd62b5d48e24/diff:/var/lib/docker/overlay2/ac3602d3dd4e947c3a4676ef8c632089eb73ee68aba964a7d95271ee18eb97f2/diff:/var/lib/docker/overlay2/ac2acc0194de08599857f1b8448ae7b4683ed77f947900bfd694cf26f6c54ffc/diff:/var/lib/docker/overlay2/fdbfaed38c23fa0bd5c54d311629017408fe01fee83151dd3f3d638a7617f4e4/diff:/var/lib/docker/overlay2/d025fd
583df9cfe294d4d46082700b7f5c621b93a796ba7f8f971ddaa60fd83a/diff:/var/lib/docker/overlay2/f4c2a2db4696fc9f1bd6e98e05d393517d2daaeb90f35ae457c61d742e4cc236/diff:/var/lib/docker/overlay2/5ca3c90c302636922d6701cd2547bba3ccd398ec5ade10e04dccd4fe6104a487/diff:/var/lib/docker/overlay2/a5a65589498adaf58375923e30a95f690962a85ecbf6af317b41821b327542b2/diff:/var/lib/docker/overlay2/ff71186ee131d2e64c9cb2be6b53d85bf84ea4a195c417de669d42fe5e10eecd/diff:/var/lib/docker/overlay2/493a221169b45236aaee4b88113fdb3c67c8fbb99e614b4a728d47a8448a3f3f/diff:/var/lib/docker/overlay2/4bafd70e2ae935045921b84746858ec62889e360ddf11495e2a15831b74efc0a/diff:/var/lib/docker/overlay2/90fd6faa0cf3969fb696847bf51d309918860f0cc4599a708e4932647f26c73e/diff:/var/lib/docker/overlay2/ea92881c6586b95c867a9734394d9d100f56f7cbe0812c11395e47b6035c4508/diff:/var/lib/docker/overlay2/ecab8d41ffba5fecbe6e01377fa7b74a9a81ceea0b6ce37ad2373c1bbf89f44a/diff:/var/lib/docker/overlay2/0a01bb2689fa7bca8ea3322bf7e0b9d33392f902c096d5e452da6755180c4a06/diff:/var/lib/d
ocker/overlay2/ab470b7aab8ddccf634d27d72ad09bcf355c2bd4439dcdf67f345220671e4238/diff:/var/lib/docker/overlay2/e7aae4cf5fe266e78947648cb680b6e10a1e6f6527df18d86605a770111ddaa5/diff:/var/lib/docker/overlay2/6dd4c667173ad3322ca465531a62d549cfe66fbb40165818a4e3923e37895eee/diff:/var/lib/docker/overlay2/6053a29c5dc20476b02a6b6d0dafc1d7a81702c6680392177192d709341eabd0/diff:/var/lib/docker/overlay2/80d8ec07feaf3a90ae374a6503523b083045c37de15abf3c2f12d0a21bea84c4/diff:/var/lib/docker/overlay2/55ad8679d9710c334bac8daf6e3b0f9a8fcafc62f44b8f2612bb054ff91aac64/diff:/var/lib/docker/overlay2/64743b589f654fa1e35b0e7be5ff94a3bebfa17c8f1c9811e0d42cdade3f57e7/diff:/var/lib/docker/overlay2/3722e4a69202d28b84adf462e6aa9f065e8079b1c00f372b68d56c9b2c44e658/diff:/var/lib/docker/overlay2/d1ceccb867521773a63007a600d64b8537e1cb227e2d9a6f9df5525e8315b3ef/diff:/var/lib/docker/overlay2/5de0b7762a7bcd971dba6ab8b5ec3a1163b2eb72c904b17e6b0b10dac2ed8cc6/diff:/var/lib/docker/overlay2/36f2255b89964a0e12e3175634bd5c1dfabf520e5a894e260323e26c3a3
83e8c/diff:/var/lib/docker/overlay2/58ca82e7923ce16120ce2bdcabd5d071ca9618a7139cac111d5d271fcb44d6b6/diff:/var/lib/docker/overlay2/c6b28d136c7e3834c9977a2115a7c798e71334d33a76997b156f96642e187677/diff:/var/lib/docker/overlay2/8a75a817735ea5c25b9b75502ba91bba33b5160dab28a17f2f44fa68bd8dcc3f/diff:/var/lib/docker/overlay2/4513fa1cc1e8023f3c0a924e36218c37dfe3595aec46e4d2d96d6c165774b8a3/diff:/var/lib/docker/overlay2/3d3be6ad927b487673f3c43210c9ea9a1acfa4d46cbcb724fce27baf9158b507/diff:/var/lib/docker/overlay2/b8e22ec10062469f680485d2f5f73afce0218c32b25e56188c00547a8152d0c7/diff:/var/lib/docker/overlay2/cb1cb5efbfa387d8fc791f28bdad103d39664ae58a6e372eddc5588db5779427/diff:/var/lib/docker/overlay2/c796de90ee7673fa4d316d056c320ee04f0b6ba574aaa33e4073e3a7200c11a6/diff:/var/lib/docker/overlay2/73c2de759693b5ffd934f7354e3db91ba89c6a5a9c24621fd7c27411bc335c5a/diff:/var/lib/docker/overlay2/46e9fe39b8edeecbe0b31037d24c2994ac3848fbb3af5ed3c47ca2fc1ad0d301/diff:/var/lib/docker/overlay2/febe0fa15a70685bf242a86e91427efdb9b7ec
302a48a7004f89cc569145c7a1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/74c495198459a4a5c5c6ec21ae93192452e69995840ce9e883970f9aca3559fe/merged",
	                "UpperDir": "/var/lib/docker/overlay2/74c495198459a4a5c5c6ec21ae93192452e69995840ce9e883970f9aca3559fe/diff",
	                "WorkDir": "/var/lib/docker/overlay2/74c495198459a4a5c5c6ec21ae93192452e69995840ce9e883970f9aca3559fe/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-104454",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-104454/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-104454",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-104454",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-104454",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "118f05724499b804d0487b720b0543ab236f60b80888d3682cc0ebf8558ea5bd",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "63800"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "63801"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "63802"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "63798"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "63799"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/118f05724499",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-104454": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "205ae934f56b",
	                        "kubernetes-upgrade-104454"
	                    ],
	                    "NetworkID": "9173b65eaa9f2d7654b2f3598fc8dfc521f56bf17e934ee1aa8c1749736be8f5",
	                    "EndpointID": "58f36a59820a25aae6f0904c8d9b74336908a3cef76836189a85c86adec6fa1a",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
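The full `docker inspect` dump above is usually narrowed with a Go template rather than read whole; the harness itself does this in the logs below when it resolves the node's SSH port. For example:

    # Container state only:
    docker container inspect kubernetes-upgrade-104454 --format '{{.State.Status}}'
    # Host port mapped to the node's SSH port (the same template used in the logs below):
    docker container inspect kubernetes-upgrade-104454 --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'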
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p kubernetes-upgrade-104454 -n kubernetes-upgrade-104454
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-104454 logs -n 25

=== CONT  TestKubernetesUpgrade
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p kubernetes-upgrade-104454 logs -n 25: (3.427375781s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| pause   | -p pause-104645                | pause-104645              | jenkins | v1.28.0 | 09 Nov 22 10:49 PST | 09 Nov 22 10:49 PST |
	|         | --alsologtostderr -v=5         |                           |         |         |                     |                     |
	| delete  | -p pause-104645                | pause-104645              | jenkins | v1.28.0 | 09 Nov 22 10:49 PST | 09 Nov 22 10:49 PST |
	|         | --alsologtostderr -v=5         |                           |         |         |                     |                     |
	| profile | list --output json             | minikube                  | jenkins | v1.28.0 | 09 Nov 22 10:49 PST | 09 Nov 22 10:49 PST |
	| delete  | -p pause-104645                | pause-104645              | jenkins | v1.28.0 | 09 Nov 22 10:49 PST | 09 Nov 22 10:49 PST |
	| start   | -p NoKubernetes-104919         | NoKubernetes-104919       | jenkins | v1.28.0 | 09 Nov 22 10:49 PST |                     |
	|         | --no-kubernetes                |                           |         |         |                     |                     |
	|         | --kubernetes-version=1.20      |                           |         |         |                     |                     |
	|         | --driver=docker                |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-104919         | NoKubernetes-104919       | jenkins | v1.28.0 | 09 Nov 22 10:49 PST | 09 Nov 22 10:49 PST |
	|         | --driver=docker                |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-104919         | NoKubernetes-104919       | jenkins | v1.28.0 | 09 Nov 22 10:49 PST | 09 Nov 22 10:49 PST |
	|         | --no-kubernetes                |                           |         |         |                     |                     |
	|         | --driver=docker                |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-104919         | NoKubernetes-104919       | jenkins | v1.28.0 | 09 Nov 22 10:49 PST | 09 Nov 22 10:49 PST |
	| start   | -p NoKubernetes-104919         | NoKubernetes-104919       | jenkins | v1.28.0 | 09 Nov 22 10:49 PST | 09 Nov 22 10:50 PST |
	|         | --no-kubernetes                |                           |         |         |                     |                     |
	|         | --driver=docker                |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-104919 sudo    | NoKubernetes-104919       | jenkins | v1.28.0 | 09 Nov 22 10:50 PST |                     |
	|         | systemctl is-active --quiet    |                           |         |         |                     |                     |
	|         | service kubelet                |                           |         |         |                     |                     |
	| profile | list                           | minikube                  | jenkins | v1.28.0 | 09 Nov 22 10:50 PST | 09 Nov 22 10:50 PST |
	| profile | list --output=json             | minikube                  | jenkins | v1.28.0 | 09 Nov 22 10:50 PST | 09 Nov 22 10:50 PST |
	| stop    | -p NoKubernetes-104919         | NoKubernetes-104919       | jenkins | v1.28.0 | 09 Nov 22 10:50 PST | 09 Nov 22 10:50 PST |
	| start   | -p NoKubernetes-104919         | NoKubernetes-104919       | jenkins | v1.28.0 | 09 Nov 22 10:50 PST | 09 Nov 22 10:50 PST |
	|         | --driver=docker                |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-104919 sudo    | NoKubernetes-104919       | jenkins | v1.28.0 | 09 Nov 22 10:50 PST |                     |
	|         | systemctl is-active --quiet    |                           |         |         |                     |                     |
	|         | service kubelet                |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-104919         | NoKubernetes-104919       | jenkins | v1.28.0 | 09 Nov 22 10:50 PST | 09 Nov 22 10:50 PST |
	| start   | -p auto-104027 --memory=2048   | auto-104027               | jenkins | v1.28.0 | 09 Nov 22 10:50 PST | 09 Nov 22 10:51 PST |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m  |                           |         |         |                     |                     |
	|         | --driver=docker                |                           |         |         |                     |                     |
	| ssh     | -p auto-104027 pgrep -a        | auto-104027               | jenkins | v1.28.0 | 09 Nov 22 10:51 PST | 09 Nov 22 10:51 PST |
	|         | kubelet                        |                           |         |         |                     |                     |
	| delete  | -p auto-104027                 | auto-104027               | jenkins | v1.28.0 | 09 Nov 22 10:51 PST | 09 Nov 22 10:51 PST |
	| start   | -p kindnet-104027              | kindnet-104027            | jenkins | v1.28.0 | 09 Nov 22 10:51 PST | 09 Nov 22 10:52 PST |
	|         | --memory=2048                  |                           |         |         |                     |                     |
	|         | --alsologtostderr              |                           |         |         |                     |                     |
	|         | --wait=true --wait-timeout=5m  |                           |         |         |                     |                     |
	|         | --cni=kindnet --driver=docker  |                           |         |         |                     |                     |
	| ssh     | -p kindnet-104027 pgrep -a     | kindnet-104027            | jenkins | v1.28.0 | 09 Nov 22 10:52 PST | 09 Nov 22 10:52 PST |
	|         | kubelet                        |                           |         |         |                     |                     |
	| delete  | -p kindnet-104027              | kindnet-104027            | jenkins | v1.28.0 | 09 Nov 22 10:52 PST | 09 Nov 22 10:52 PST |
	| start   | -p cilium-104028 --memory=2048 | cilium-104028             | jenkins | v1.28.0 | 09 Nov 22 10:52 PST | 09 Nov 22 10:54 PST |
	|         | --alsologtostderr --wait=true  |                           |         |         |                     |                     |
	|         | --wait-timeout=5m --cni=cilium |                           |         |         |                     |                     |
	|         | --driver=docker                |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-104454   | kubernetes-upgrade-104454 | jenkins | v1.28.0 | 09 Nov 22 10:53 PST |                     |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                           |         |         |                     |                     |
	|         | --driver=docker                |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-104454   | kubernetes-upgrade-104454 | jenkins | v1.28.0 | 09 Nov 22 10:53 PST | 09 Nov 22 10:54 PST |
	|         | --memory=2200                  |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3   |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                           |         |         |                     |                     |
	|         | --driver=docker                |                           |         |         |                     |                     |
	|---------|--------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/11/09 10:53:46
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.19.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1109 10:53:46.841735   35087 out.go:296] Setting OutFile to fd 1 ...
	I1109 10:53:46.841925   35087 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1109 10:53:46.841930   35087 out.go:309] Setting ErrFile to fd 2...
	I1109 10:53:46.841934   35087 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1109 10:53:46.842056   35087 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15331-22028/.minikube/bin
	I1109 10:53:46.842579   35087 out.go:303] Setting JSON to false
	I1109 10:53:46.862519   35087 start.go:116] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":14001,"bootTime":1668006025,"procs":385,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.0","kernelVersion":"22.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W1109 10:53:46.862627   35087 start.go:124] gopshost.Virtualization returned error: not implemented yet
	I1109 10:53:46.883795   35087 out.go:177] * [kubernetes-upgrade-104454] minikube v1.28.0 on Darwin 13.0
	I1109 10:53:46.920850   35087 notify.go:220] Checking for updates...
	I1109 10:53:46.957624   35087 out.go:177]   - MINIKUBE_LOCATION=15331
	I1109 10:53:46.999753   35087 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15331-22028/kubeconfig
	I1109 10:53:47.021155   35087 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1109 10:53:47.042676   35087 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 10:53:47.064079   35087 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15331-22028/.minikube
	I1109 10:53:42.561359   34901 pod_ready.go:102] pod "cilium-wqpb8" in "kube-system" namespace has status "Ready":"False"
	I1109 10:53:45.063602   34901 pod_ready.go:102] pod "cilium-wqpb8" in "kube-system" namespace has status "Ready":"False"
	I1109 10:53:47.101438   35087 config.go:180] Loaded profile config "kubernetes-upgrade-104454": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1109 10:53:47.101823   35087 driver.go:365] Setting default libvirt URI to qemu:///system
	I1109 10:53:47.166146   35087 docker.go:137] docker version: linux-20.10.20
	I1109 10:53:47.166287   35087 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 10:53:47.330735   35087 info.go:266] docker info: {ID:DDH7:AERQ:YR2O:D5QA:GJP7:5WGB:4FTA:645H:2EO7:4YBT:LF5P:55H2 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:76 OomKillDisable:false NGoroutines:58 SystemTime:2022-11-09 18:53:47.23528659 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServe
rAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231719936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default
name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.1] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/loc
al/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1109 10:53:47.372674   35087 out.go:177] * Using the docker driver based on existing profile
	I1109 10:53:47.395487   35087 start.go:282] selected driver: docker
	I1109 10:53:47.395512   35087 start.go:808] validating driver "docker" against &{Name:kubernetes-upgrade-104454 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:kubernetes-upgrade-104454 Namespace:default APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Custo
mQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1109 10:53:47.395619   35087 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 10:53:47.398458   35087 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 10:53:47.552422   35087 info.go:266] docker info: {ID:DDH7:AERQ:YR2O:D5QA:GJP7:5WGB:4FTA:645H:2EO7:4YBT:LF5P:55H2 Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:76 OomKillDisable:false NGoroutines:58 SystemTime:2022-11-09 18:53:47.451533857 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231719936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.1] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/lo
cal/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1109 10:53:47.552578   35087 cni.go:95] Creating CNI manager for ""
	I1109 10:53:47.552595   35087 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1109 10:53:47.552610   35087 start_flags.go:317] config:
	{Name:kubernetes-upgrade-104454 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:kubernetes-upgrade-104454 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker C
RISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_clie
nt SocketVMnetPath:/var/run/socket_vmnet}
	I1109 10:53:47.594847   35087 out.go:177] * Starting control plane node kubernetes-upgrade-104454 in cluster kubernetes-upgrade-104454
	I1109 10:53:47.615625   35087 cache.go:120] Beginning downloading kic base image for docker with docker
	I1109 10:53:47.636795   35087 out.go:177] * Pulling base image ...
	I1109 10:53:47.657842   35087 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1109 10:53:47.657854   35087 image.go:76] Checking for gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon
	I1109 10:53:47.657916   35087 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15331-22028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	I1109 10:53:47.657928   35087 cache.go:57] Caching tarball of preloaded images
	I1109 10:53:47.658060   35087 preload.go:174] Found /Users/jenkins/minikube-integration/15331-22028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1109 10:53:47.658069   35087 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on docker
	I1109 10:53:47.658613   35087 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/kubernetes-upgrade-104454/config.json ...
	I1109 10:53:47.713668   35087 image.go:80] Found gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon, skipping pull
	I1109 10:53:47.713689   35087 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 exists in daemon, skipping load
	I1109 10:53:47.713698   35087 cache.go:208] Successfully downloaded all kic artifacts
	I1109 10:53:47.713759   35087 start.go:364] acquiring machines lock for kubernetes-upgrade-104454: {Name:mk1c8e548782a85bd1897e2b49ce600df1a310a4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 10:53:47.713858   35087 start.go:368] acquired machines lock for "kubernetes-upgrade-104454" in 77.354µs
	I1109 10:53:47.713886   35087 start.go:96] Skipping create...Using existing machine configuration
	I1109 10:53:47.713895   35087 fix.go:55] fixHost starting: 
	I1109 10:53:47.714157   35087 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-104454 --format={{.State.Status}}
	I1109 10:53:47.776223   35087 fix.go:103] recreateIfNeeded on kubernetes-upgrade-104454: state=Running err=<nil>
	W1109 10:53:47.776266   35087 fix.go:129] unexpected machine state, will restart: <nil>
	I1109 10:53:47.818840   35087 out.go:177] * Updating the running docker "kubernetes-upgrade-104454" container ...
	I1109 10:53:47.840587   35087 machine.go:88] provisioning docker machine ...
	I1109 10:53:47.840635   35087 ubuntu.go:169] provisioning hostname "kubernetes-upgrade-104454"
	I1109 10:53:47.840740   35087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-104454
	I1109 10:53:47.898807   35087 main.go:134] libmachine: Using SSH client type: native
	I1109 10:53:47.899005   35087 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6d00] 0x13e9e80 <nil>  [] 0s} 127.0.0.1 63800 <nil> <nil>}
	I1109 10:53:47.899018   35087 main.go:134] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-104454 && echo "kubernetes-upgrade-104454" | sudo tee /etc/hostname
	I1109 10:53:48.024840   35087 main.go:134] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-104454
	
	I1109 10:53:48.024956   35087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-104454
	I1109 10:53:48.085931   35087 main.go:134] libmachine: Using SSH client type: native
	I1109 10:53:48.086096   35087 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6d00] 0x13e9e80 <nil>  [] 0s} 127.0.0.1 63800 <nil> <nil>}
	I1109 10:53:48.086109   35087 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-104454' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-104454/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-104454' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 10:53:48.204192   35087 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I1109 10:53:48.204210   35087 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15331-22028/.minikube CaCertPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15331-22028/.minikube}
	I1109 10:53:48.204232   35087 ubuntu.go:177] setting up certificates
	I1109 10:53:48.204247   35087 provision.go:83] configureAuth start
	I1109 10:53:48.204349   35087 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-104454
	I1109 10:53:48.262356   35087 provision.go:138] copyHostCerts
	I1109 10:53:48.262466   35087 exec_runner.go:144] found /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.pem, removing ...
	I1109 10:53:48.262476   35087 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.pem
	I1109 10:53:48.262610   35087 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.pem (1082 bytes)
	I1109 10:53:48.262824   35087 exec_runner.go:144] found /Users/jenkins/minikube-integration/15331-22028/.minikube/cert.pem, removing ...
	I1109 10:53:48.262830   35087 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15331-22028/.minikube/cert.pem
	I1109 10:53:48.262889   35087 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15331-22028/.minikube/cert.pem (1123 bytes)
	I1109 10:53:48.263034   35087 exec_runner.go:144] found /Users/jenkins/minikube-integration/15331-22028/.minikube/key.pem, removing ...
	I1109 10:53:48.263040   35087 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15331-22028/.minikube/key.pem
	I1109 10:53:48.263101   35087 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15331-22028/.minikube/key.pem (1675 bytes)
	I1109 10:53:48.263217   35087 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-104454 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube kubernetes-upgrade-104454]
	I1109 10:53:48.411939   35087 provision.go:172] copyRemoteCerts
	I1109 10:53:48.412023   35087 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 10:53:48.412095   35087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-104454
	I1109 10:53:48.476213   35087 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63800 SSHKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/kubernetes-upgrade-104454/id_rsa Username:docker}
	I1109 10:53:48.563848   35087 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1109 10:53:48.583203   35087 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1109 10:53:48.601727   35087 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1109 10:53:48.621559   35087 provision.go:86] duration metric: configureAuth took 417.303901ms
	I1109 10:53:48.621573   35087 ubuntu.go:193] setting minikube options for container-runtime
	I1109 10:53:48.621730   35087 config.go:180] Loaded profile config "kubernetes-upgrade-104454": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1109 10:53:48.621808   35087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-104454
	I1109 10:53:48.685475   35087 main.go:134] libmachine: Using SSH client type: native
	I1109 10:53:48.685647   35087 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6d00] 0x13e9e80 <nil>  [] 0s} 127.0.0.1 63800 <nil> <nil>}
	I1109 10:53:48.685657   35087 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1109 10:53:48.802343   35087 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1109 10:53:48.802363   35087 ubuntu.go:71] root file system type: overlay
	I1109 10:53:48.802506   35087 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1109 10:53:48.802605   35087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-104454
	I1109 10:53:48.862876   35087 main.go:134] libmachine: Using SSH client type: native
	I1109 10:53:48.863080   35087 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6d00] 0x13e9e80 <nil>  [] 0s} 127.0.0.1 63800 <nil> <nil>}
	I1109 10:53:48.863137   35087 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1109 10:53:48.990450   35087 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1109 10:53:48.990576   35087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-104454
	I1109 10:53:49.051400   35087 main.go:134] libmachine: Using SSH client type: native
	I1109 10:53:49.051588   35087 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6d00] 0x13e9e80 <nil>  [] 0s} 127.0.0.1 63800 <nil> <nil>}
	I1109 10:53:49.051602   35087 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1109 10:53:49.177228   35087 main.go:134] libmachine: SSH cmd err, output: <nil>: 
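
The update command above is idempotent: diff -u exits zero when the rendered unit matches what is installed, so the || { ... } block, and with it the Docker restart, only runs when something actually changed. Spelled out as an if-statement (a sketch of the same logic, with diff output silenced):

	if ! sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new > /dev/null; then
	  sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
	  sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker
	fi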
	I1109 10:53:49.177243   35087 machine.go:91] provisioned docker machine in 1.33665468s
	I1109 10:53:49.177262   35087 start.go:300] post-start starting for "kubernetes-upgrade-104454" (driver="docker")
	I1109 10:53:49.177267   35087 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 10:53:49.177340   35087 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 10:53:49.177404   35087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-104454
	I1109 10:53:49.235255   35087 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63800 SSHKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/kubernetes-upgrade-104454/id_rsa Username:docker}
	I1109 10:53:49.318613   35087 ssh_runner.go:195] Run: cat /etc/os-release
	I1109 10:53:49.322588   35087 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1109 10:53:49.322604   35087 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1109 10:53:49.322611   35087 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1109 10:53:49.322616   35087 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I1109 10:53:49.322626   35087 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15331-22028/.minikube/addons for local assets ...
	I1109 10:53:49.322719   35087 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15331-22028/.minikube/files for local assets ...
	I1109 10:53:49.322883   35087 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15331-22028/.minikube/files/etc/ssl/certs/228682.pem -> 228682.pem in /etc/ssl/certs
	I1109 10:53:49.323067   35087 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1109 10:53:49.332901   35087 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/files/etc/ssl/certs/228682.pem --> /etc/ssl/certs/228682.pem (1708 bytes)
	I1109 10:53:49.353360   35087 start.go:303] post-start completed in 176.090814ms
	I1109 10:53:49.353453   35087 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 10:53:49.353522   35087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-104454
	I1109 10:53:49.413168   35087 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63800 SSHKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/kubernetes-upgrade-104454/id_rsa Username:docker}
	I1109 10:53:49.496026   35087 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1109 10:53:49.500857   35087 fix.go:57] fixHost completed within 1.786972871s
	I1109 10:53:49.500876   35087 start.go:83] releasing machines lock for "kubernetes-upgrade-104454", held for 1.787025944s
	I1109 10:53:49.501001   35087 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-104454
	I1109 10:53:49.564410   35087 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1109 10:53:49.564418   35087 ssh_runner.go:195] Run: systemctl --version
	I1109 10:53:49.564502   35087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-104454
	I1109 10:53:49.564504   35087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-104454
	I1109 10:53:49.629036   35087 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63800 SSHKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/kubernetes-upgrade-104454/id_rsa Username:docker}
	I1109 10:53:49.629078   35087 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63800 SSHKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/kubernetes-upgrade-104454/id_rsa Username:docker}
	I1109 10:53:49.775681   35087 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1109 10:53:49.786920   35087 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I1109 10:53:49.787020   35087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1109 10:53:49.796856   35087 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 10:53:49.810590   35087 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1109 10:53:49.903839   35087 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1109 10:53:50.004608   35087 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 10:53:50.096139   35087 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1109 10:53:47.559841   34901 pod_ready.go:102] pod "cilium-wqpb8" in "kube-system" namespace has status "Ready":"False"
	I1109 10:53:49.560420   34901 pod_ready.go:102] pod "cilium-wqpb8" in "kube-system" namespace has status "Ready":"False"
	I1109 10:53:51.561988   34901 pod_ready.go:102] pod "cilium-wqpb8" in "kube-system" namespace has status "Ready":"False"
	I1109 10:53:54.982065   35087 ssh_runner.go:235] Completed: sudo systemctl restart docker: (4.885943858s)
	I1109 10:53:54.982153   35087 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1109 10:53:55.066862   35087 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 10:53:55.172168   35087 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I1109 10:53:55.194447   35087 start.go:451] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1109 10:53:55.194581   35087 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1109 10:53:55.204371   35087 start.go:472] Will wait 60s for crictl version
	I1109 10:53:55.204453   35087 ssh_runner.go:195] Run: sudo crictl version
	I1109 10:53:55.257098   35087 start.go:481] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.20
	RuntimeApiVersion:  1.41.0
	I1109 10:53:55.257238   35087 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1109 10:53:55.342217   35087 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1109 10:53:55.448952   35087 out.go:204] * Preparing Kubernetes v1.25.3 on Docker 20.10.20 ...
	I1109 10:53:55.449086   35087 cli_runner.go:164] Run: docker exec -t kubernetes-upgrade-104454 dig +short host.docker.internal
	I1109 10:53:55.556414   35087 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I1109 10:53:55.556537   35087 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I1109 10:53:55.563160   35087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-104454
	I1109 10:53:55.623098   35087 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1109 10:53:55.623194   35087 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1109 10:53:55.648940   35087 docker.go:613] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I1109 10:53:55.648958   35087 docker.go:543] Images already preloaded, skipping extraction
	I1109 10:53:55.649051   35087 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1109 10:53:55.675958   35087 docker.go:613] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I1109 10:53:55.675982   35087 cache_images.go:84] Images are preloaded, skipping loading
	I1109 10:53:55.676094   35087 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1109 10:53:55.754720   35087 cni.go:95] Creating CNI manager for ""
	I1109 10:53:55.754737   35087 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1109 10:53:55.754755   35087 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1109 10:53:55.754781   35087 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-104454 NodeName:kubernetes-upgrade-104454 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
	I1109 10:53:55.754951   35087 kubeadm.go:161] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "kubernetes-upgrade-104454"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
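
Two settings in this generated config are deliberately neutralized: evictionHard thresholds of "0%" (together with imageGCHighThresholdPercent: 100) switch off the kubelet's disk-pressure eviction, and the zeroed conntrack values make kube-proxy skip writing the corresponding net.netfilter sysctls, writes that would fail on nodes where /proc/sys is read-only, as in a container. For reference, the host-side values kube-proxy would otherwise manage can be inspected with (assuming the nf_conntrack module is loaded):

	sysctl net.netfilter.nf_conntrack_max
	sysctl net.netfilter.nf_conntrack_tcp_timeout_established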
	
	I1109 10:53:55.755076   35087 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=kubernetes-upgrade-104454 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.3 ClusterName:kubernetes-upgrade-104454 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1109 10:53:55.755160   35087 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
	I1109 10:53:55.765860   35087 binaries.go:44] Found k8s binaries, skipping transfer
	I1109 10:53:55.765949   35087 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1109 10:53:55.773606   35087 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (487 bytes)
	I1109 10:53:55.787899   35087 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1109 10:53:55.802583   35087 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2047 bytes)
	I1109 10:53:55.824515   35087 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I1109 10:53:55.828781   35087 certs.go:54] Setting up /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/kubernetes-upgrade-104454 for IP: 192.168.67.2
	I1109 10:53:55.828893   35087 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.key
	I1109 10:53:55.828950   35087 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15331-22028/.minikube/proxy-client-ca.key
	I1109 10:53:55.829045   35087 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/kubernetes-upgrade-104454/client.key
	I1109 10:53:55.829129   35087 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/kubernetes-upgrade-104454/apiserver.key.c7fa3a9e
	I1109 10:53:55.829191   35087 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/kubernetes-upgrade-104454/proxy-client.key
	I1109 10:53:55.829437   35087 certs.go:388] found cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/22868.pem (1338 bytes)
	W1109 10:53:55.829501   35087 certs.go:384] ignoring /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/22868_empty.pem, impossibly tiny 0 bytes
	I1109 10:53:55.829518   35087 certs.go:388] found cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca-key.pem (1675 bytes)
	I1109 10:53:55.829557   35087 certs.go:388] found cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca.pem (1082 bytes)
	I1109 10:53:55.829594   35087 certs.go:388] found cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/cert.pem (1123 bytes)
	I1109 10:53:55.829630   35087 certs.go:388] found cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/key.pem (1675 bytes)
	I1109 10:53:55.829707   35087 certs.go:388] found cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15331-22028/.minikube/files/etc/ssl/certs/228682.pem (1708 bytes)
	I1109 10:53:55.830255   35087 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/kubernetes-upgrade-104454/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1109 10:53:55.847627   35087 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/kubernetes-upgrade-104454/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1109 10:53:55.865604   35087 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/kubernetes-upgrade-104454/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1109 10:53:55.885865   35087 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/kubernetes-upgrade-104454/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1109 10:53:55.903278   35087 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 10:53:55.921413   35087 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1109 10:53:55.938717   35087 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 10:53:55.956108   35087 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1109 10:53:55.973164   35087 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/files/etc/ssl/certs/228682.pem --> /usr/share/ca-certificates/228682.pem (1708 bytes)
	I1109 10:53:55.991043   35087 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 10:53:56.008440   35087 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/22868.pem --> /usr/share/ca-certificates/22868.pem (1338 bytes)
	I1109 10:53:56.025877   35087 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1109 10:53:56.043858   35087 ssh_runner.go:195] Run: openssl version
	I1109 10:53:56.053345   35087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22868.pem && ln -fs /usr/share/ca-certificates/22868.pem /etc/ssl/certs/22868.pem"
	I1109 10:53:56.067774   35087 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22868.pem
	I1109 10:53:56.073497   35087 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Nov  9 18:08 /usr/share/ca-certificates/22868.pem
	I1109 10:53:56.073594   35087 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22868.pem
	I1109 10:53:56.080276   35087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22868.pem /etc/ssl/certs/51391683.0"
	I1109 10:53:56.088820   35087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/228682.pem && ln -fs /usr/share/ca-certificates/228682.pem /etc/ssl/certs/228682.pem"
	I1109 10:53:56.097687   35087 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/228682.pem
	I1109 10:53:56.102362   35087 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Nov  9 18:08 /usr/share/ca-certificates/228682.pem
	I1109 10:53:56.102416   35087 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/228682.pem
	I1109 10:53:56.109100   35087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/228682.pem /etc/ssl/certs/3ec20f2e.0"
	I1109 10:53:56.116869   35087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 10:53:56.124701   35087 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 10:53:56.128663   35087 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Nov  9 18:04 /usr/share/ca-certificates/minikubeCA.pem
	I1109 10:53:56.128725   35087 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 10:53:56.133796   35087 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1109 10:53:56.141043   35087 kubeadm.go:396] StartCluster: {Name:kubernetes-upgrade-104454 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:kubernetes-upgrade-104454 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1109 10:53:56.141153   35087 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1109 10:53:56.172840   35087 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1109 10:53:56.180954   35087 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I1109 10:53:56.180969   35087 kubeadm.go:627] restartCluster start
	I1109 10:53:56.181031   35087 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1109 10:53:56.188185   35087 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1109 10:53:56.188272   35087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-104454
	I1109 10:53:56.247196   35087 kubeconfig.go:92] found "kubernetes-upgrade-104454" server: "https://127.0.0.1:63799"
	I1109 10:53:56.247966   35087 kapi.go:59] client config for kubernetes-upgrade-104454: &rest.Config{Host:"https://127.0.0.1:63799", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/kubernetes-upgrade-104454/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/kubernetes-upgrade-104454/client.key", CAFile:"/Users/jenkins/minikube-integration/15331-22028/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x23463c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1109 10:53:56.248513   35087 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1109 10:53:56.256324   35087 api_server.go:165] Checking apiserver status ...
	I1109 10:53:56.256392   35087 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 10:53:56.267261   35087 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/12739/cgroup
	W1109 10:53:56.277281   35087 api_server.go:176] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/12739/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1109 10:53:56.277356   35087 ssh_runner.go:195] Run: ls
	I1109 10:53:56.281691   35087 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:63799/healthz ...
	I1109 10:53:54.059073   34901 pod_ready.go:102] pod "cilium-wqpb8" in "kube-system" namespace has status "Ready":"False"
	I1109 10:53:56.560596   34901 pod_ready.go:102] pod "cilium-wqpb8" in "kube-system" namespace has status "Ready":"False"
	I1109 10:54:01.283057   35087 api_server.go:268] stopped: https://127.0.0.1:63799/healthz: Get "https://127.0.0.1:63799/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1109 10:54:01.283124   35087 retry.go:31] will retry after 263.082536ms: state is "Stopped"
	I1109 10:54:01.548320   35087 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:63799/healthz ...
	I1109 10:53:59.062241   34901 pod_ready.go:102] pod "cilium-wqpb8" in "kube-system" namespace has status "Ready":"False"
	I1109 10:54:01.558228   34901 pod_ready.go:102] pod "cilium-wqpb8" in "kube-system" namespace has status "Ready":"False"
	I1109 10:54:06.549552   35087 api_server.go:268] stopped: https://127.0.0.1:63799/healthz: Get "https://127.0.0.1:63799/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1109 10:54:06.549591   35087 retry.go:31] will retry after 381.329545ms: state is "Stopped"
	I1109 10:54:03.558361   34901 pod_ready.go:102] pod "cilium-wqpb8" in "kube-system" namespace has status "Ready":"False"
	I1109 10:54:05.561240   34901 pod_ready.go:102] pod "cilium-wqpb8" in "kube-system" namespace has status "Ready":"False"
	I1109 10:54:06.932972   35087 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:63799/healthz ...
	I1109 10:54:08.059986   34901 pod_ready.go:102] pod "cilium-wqpb8" in "kube-system" namespace has status "Ready":"False"
	I1109 10:54:10.061374   34901 pod_ready.go:102] pod "cilium-wqpb8" in "kube-system" namespace has status "Ready":"False"
	I1109 10:54:12.061709   34901 pod_ready.go:102] pod "cilium-wqpb8" in "kube-system" namespace has status "Ready":"False"
	I1109 10:54:11.933520   35087 api_server.go:268] stopped: https://127.0.0.1:63799/healthz: Get "https://127.0.0.1:63799/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1109 10:54:12.135781   35087 api_server.go:165] Checking apiserver status ...
	I1109 10:54:12.135926   35087 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 10:54:12.147626   35087 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/12739/cgroup
	W1109 10:54:12.157137   35087 api_server.go:176] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/12739/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1109 10:54:12.157215   35087 ssh_runner.go:195] Run: ls
	I1109 10:54:12.162148   35087 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:63799/healthz ...
	I1109 10:54:14.385472   35087 api_server.go:278] https://127.0.0.1:63799/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1109 10:54:14.385488   35087 retry.go:31] will retry after 242.214273ms: https://127.0.0.1:63799/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1109 10:54:14.627875   35087 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:63799/healthz ...
	I1109 10:54:14.635000   35087 api_server.go:278] https://127.0.0.1:63799/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1109 10:54:14.635019   35087 retry.go:31] will retry after 300.724609ms: https://127.0.0.1:63799/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
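
The bracketed list is the apiserver's verbose healthz report: [+] marks a passing check, [-] a failing one, and the aggregated endpoint always withholds failure reasons. Each check is also exposed under its own path, so a single check, for example etcd, can be queried directly:

	curl -ks https://127.0.0.1:63799/healthz/etcd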
	I1109 10:54:14.935815   35087 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:63799/healthz ...
	I1109 10:54:14.943306   35087 api_server.go:278] https://127.0.0.1:63799/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1109 10:54:14.943324   35087 retry.go:31] will retry after 427.113882ms: https://127.0.0.1:63799/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1109 10:54:15.370645   35087 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:63799/healthz ...
	I1109 10:54:15.377448   35087 api_server.go:278] https://127.0.0.1:63799/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1109 10:54:15.377470   35087 retry.go:31] will retry after 382.2356ms: https://127.0.0.1:63799/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1109 10:54:15.760224   35087 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:63799/healthz ...
	I1109 10:54:15.766942   35087 api_server.go:278] https://127.0.0.1:63799/healthz returned 200:
	ok
	I1109 10:54:15.777538   35087 system_pods.go:86] 5 kube-system pods found
	I1109 10:54:15.777556   35087 system_pods.go:89] "etcd-kubernetes-upgrade-104454" [7a03416b-e19e-4187-a3ea-fd5adcef5241] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1109 10:54:15.777562   35087 system_pods.go:89] "kube-apiserver-kubernetes-upgrade-104454" [113aad4e-1233-48c8-b02e-dc79261594ba] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1109 10:54:15.777575   35087 system_pods.go:89] "kube-controller-manager-kubernetes-upgrade-104454" [0d612fac-e85b-4b27-a400-7e21f49ca7b6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1109 10:54:15.777584   35087 system_pods.go:89] "kube-scheduler-kubernetes-upgrade-104454" [5a42c351-c7eb-44c7-9d96-5b7b93af8c87] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1109 10:54:15.777589   35087 system_pods.go:89] "storage-provisioner" [85fd823a-a176-49a9-bb68-579c48faad29] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1109 10:54:15.777597   35087 kubeadm.go:611] needs reconfigure: missing components: kube-dns, kube-proxy
	I1109 10:54:15.777604   35087 kubeadm.go:1114] stopping kube-system containers ...
	I1109 10:54:15.777685   35087 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1109 10:54:15.802825   35087 docker.go:444] Stopping containers: [7d87dfbeabf6 a62adef85dbd a9b6924ff0e3 eb694d87f5ed 42b377adcd15 662f692e9a57 9b83a1a3df6b 4b02ca893163 1f2f43e3e00f c164519219aa 8f2ad5c9bc22 ae45d383399a 18b9663be8ac 95bc23953f61 b34651e2bef5 3814fbe859d7 06989f3f0bb6 1bea76f5f4cd]
	I1109 10:54:15.802932   35087 ssh_runner.go:195] Run: docker stop 7d87dfbeabf6 a62adef85dbd a9b6924ff0e3 eb694d87f5ed 42b377adcd15 662f692e9a57 9b83a1a3df6b 4b02ca893163 1f2f43e3e00f c164519219aa 8f2ad5c9bc22 ae45d383399a 18b9663be8ac 95bc23953f61 b34651e2bef5 3814fbe859d7 06989f3f0bb6 1bea76f5f4cd
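
The name filter in these commands leans on the k8s_<container>_<pod>_<namespace>_<uid>_<attempt> names that cri-dockerd assigns, which is why the pattern with _(kube-system)_ selects exactly the kube-system containers. The same listing by hand, with names shown:

	docker ps -a --filter name='k8s_.*_(kube-system)_' --format '{{.ID}} {{.Names}}'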
	I1109 10:54:16.409879   35087 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1109 10:54:16.450303   35087 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1109 10:54:16.458596   35087 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5643 Nov  9 18:53 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Nov  9 18:53 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2039 Nov  9 18:53 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Nov  9 18:53 /etc/kubernetes/scheduler.conf
	
	I1109 10:54:16.458675   35087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1109 10:54:16.466734   35087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1109 10:54:16.511740   35087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1109 10:54:16.521208   35087 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1109 10:54:16.521280   35087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1109 10:54:16.529530   35087 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1109 10:54:16.537413   35087 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1109 10:54:16.537486   35087 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1109 10:54:16.547872   35087 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1109 10:54:16.557346   35087 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1109 10:54:16.557361   35087 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1109 10:54:16.604501   35087 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1109 10:54:14.560118   34901 pod_ready.go:102] pod "cilium-wqpb8" in "kube-system" namespace has status "Ready":"False"
	I1109 10:54:17.058277   34901 pod_ready.go:102] pod "cilium-wqpb8" in "kube-system" namespace has status "Ready":"False"
	I1109 10:54:18.558460   34901 pod_ready.go:92] pod "cilium-wqpb8" in "kube-system" namespace has status "Ready":"True"
	I1109 10:54:18.558475   34901 pod_ready.go:81] duration metric: took 51.512925714s waiting for pod "cilium-wqpb8" in "kube-system" namespace to be "Ready" ...
	I1109 10:54:18.558482   34901 pod_ready.go:78] waiting up to 5m0s for pod "coredns-565d847f94-t7q9z" in "kube-system" namespace to be "Ready" ...
	I1109 10:54:18.563220   34901 pod_ready.go:92] pod "coredns-565d847f94-t7q9z" in "kube-system" namespace has status "Ready":"True"
	I1109 10:54:18.563229   34901 pod_ready.go:81] duration metric: took 4.742792ms waiting for pod "coredns-565d847f94-t7q9z" in "kube-system" namespace to be "Ready" ...
	I1109 10:54:18.563235   34901 pod_ready.go:78] waiting up to 5m0s for pod "etcd-cilium-104028" in "kube-system" namespace to be "Ready" ...
	I1109 10:54:18.567209   34901 pod_ready.go:92] pod "etcd-cilium-104028" in "kube-system" namespace has status "Ready":"True"
	I1109 10:54:18.567218   34901 pod_ready.go:81] duration metric: took 3.978221ms waiting for pod "etcd-cilium-104028" in "kube-system" namespace to be "Ready" ...
	I1109 10:54:18.567224   34901 pod_ready.go:78] waiting up to 5m0s for pod "kube-apiserver-cilium-104028" in "kube-system" namespace to be "Ready" ...
	I1109 10:54:18.571420   34901 pod_ready.go:92] pod "kube-apiserver-cilium-104028" in "kube-system" namespace has status "Ready":"True"
	I1109 10:54:18.571428   34901 pod_ready.go:81] duration metric: took 4.199579ms waiting for pod "kube-apiserver-cilium-104028" in "kube-system" namespace to be "Ready" ...
	I1109 10:54:18.571434   34901 pod_ready.go:78] waiting up to 5m0s for pod "kube-controller-manager-cilium-104028" in "kube-system" namespace to be "Ready" ...
	I1109 10:54:18.575924   34901 pod_ready.go:92] pod "kube-controller-manager-cilium-104028" in "kube-system" namespace has status "Ready":"True"
	I1109 10:54:18.575931   34901 pod_ready.go:81] duration metric: took 4.492642ms waiting for pod "kube-controller-manager-cilium-104028" in "kube-system" namespace to be "Ready" ...
	I1109 10:54:18.575937   34901 pod_ready.go:78] waiting up to 5m0s for pod "kube-proxy-jd9xt" in "kube-system" namespace to be "Ready" ...
	I1109 10:54:18.956522   34901 pod_ready.go:92] pod "kube-proxy-jd9xt" in "kube-system" namespace has status "Ready":"True"
	I1109 10:54:18.956532   34901 pod_ready.go:81] duration metric: took 380.594065ms waiting for pod "kube-proxy-jd9xt" in "kube-system" namespace to be "Ready" ...
	I1109 10:54:18.956538   34901 pod_ready.go:78] waiting up to 5m0s for pod "kube-scheduler-cilium-104028" in "kube-system" namespace to be "Ready" ...
	I1109 10:54:19.355517   34901 pod_ready.go:92] pod "kube-scheduler-cilium-104028" in "kube-system" namespace has status "Ready":"True"
	I1109 10:54:19.355528   34901 pod_ready.go:81] duration metric: took 398.98814ms waiting for pod "kube-scheduler-cilium-104028" in "kube-system" namespace to be "Ready" ...
	I1109 10:54:19.355534   34901 pod_ready.go:38] duration metric: took 56.374555855s of extra waiting for all system-critical pods and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
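
Roughly the same readiness gate can be reproduced with kubectl wait, using one of the label selectors from the line above (context name taken from this run):

	kubectl --context cilium-104028 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=5m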
	I1109 10:54:19.355557   34901 api_server.go:51] waiting for apiserver process to appear ...
	I1109 10:54:19.355619   34901 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 10:54:19.367270   34901 api_server.go:71] duration metric: took 56.667719817s to wait for apiserver process to appear ...
	I1109 10:54:19.367288   34901 api_server.go:87] waiting for apiserver healthz status ...
	I1109 10:54:19.367297   34901 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:64278/healthz ...
	I1109 10:54:19.372949   34901 api_server.go:278] https://127.0.0.1:64278/healthz returned 200:
	ok
	I1109 10:54:19.374361   34901 api_server.go:140] control plane version: v1.25.3
	I1109 10:54:19.374371   34901 api_server.go:130] duration metric: took 7.078607ms to wait for apiserver health ...
	I1109 10:54:19.374378   34901 system_pods.go:43] waiting for kube-system pods to appear ...
	I1109 10:54:19.560092   34901 system_pods.go:59] 9 kube-system pods found
	I1109 10:54:19.560108   34901 system_pods.go:61] "cilium-operator-656749584-zvrg8" [e0837c91-145b-43b5-9e72-b5e57709cbf0] Running
	I1109 10:54:19.560112   34901 system_pods.go:61] "cilium-wqpb8" [1e726c89-8cae-4b06-9262-0d93d94cb50f] Running
	I1109 10:54:19.560116   34901 system_pods.go:61] "coredns-565d847f94-t7q9z" [8c6eaafa-aed8-4053-b232-feacb4e922d8] Running
	I1109 10:54:19.560119   34901 system_pods.go:61] "etcd-cilium-104028" [6bc73198-b9c3-4b5b-b019-a9e73c89aa8d] Running
	I1109 10:54:19.560122   34901 system_pods.go:61] "kube-apiserver-cilium-104028" [552d9594-6cf3-4fa1-b73a-b77bf91d31f7] Running
	I1109 10:54:19.560127   34901 system_pods.go:61] "kube-controller-manager-cilium-104028" [68f96ad4-af9e-49f4-8ab1-e0edffcf61a1] Running
	I1109 10:54:19.560131   34901 system_pods.go:61] "kube-proxy-jd9xt" [ee074288-4444-4a36-9b38-4d6c8370897d] Running
	I1109 10:54:19.560135   34901 system_pods.go:61] "kube-scheduler-cilium-104028" [1bb8cd8f-018e-4dfd-9b9a-7de12ebfd955] Running
	I1109 10:54:19.560139   34901 system_pods.go:61] "storage-provisioner" [ba61deb5-35c5-429e-b6d3-3b4024c222e6] Running
	I1109 10:54:19.560142   34901 system_pods.go:74] duration metric: took 185.760019ms to wait for pod list to return data ...
	I1109 10:54:19.560148   34901 default_sa.go:34] waiting for default service account to be created ...
	I1109 10:54:19.758760   34901 default_sa.go:45] found service account: "default"
	I1109 10:54:19.758771   34901 default_sa.go:55] duration metric: took 198.621198ms for default service account to be created ...
	I1109 10:54:19.758778   34901 system_pods.go:116] waiting for k8s-apps to be running ...
	I1109 10:54:19.961801   34901 system_pods.go:86] 9 kube-system pods found
	I1109 10:54:19.961816   34901 system_pods.go:89] "cilium-operator-656749584-zvrg8" [e0837c91-145b-43b5-9e72-b5e57709cbf0] Running
	I1109 10:54:19.961820   34901 system_pods.go:89] "cilium-wqpb8" [1e726c89-8cae-4b06-9262-0d93d94cb50f] Running
	I1109 10:54:19.961824   34901 system_pods.go:89] "coredns-565d847f94-t7q9z" [8c6eaafa-aed8-4053-b232-feacb4e922d8] Running
	I1109 10:54:19.961834   34901 system_pods.go:89] "etcd-cilium-104028" [6bc73198-b9c3-4b5b-b019-a9e73c89aa8d] Running
	I1109 10:54:19.961838   34901 system_pods.go:89] "kube-apiserver-cilium-104028" [552d9594-6cf3-4fa1-b73a-b77bf91d31f7] Running
	I1109 10:54:19.961841   34901 system_pods.go:89] "kube-controller-manager-cilium-104028" [68f96ad4-af9e-49f4-8ab1-e0edffcf61a1] Running
	I1109 10:54:19.961845   34901 system_pods.go:89] "kube-proxy-jd9xt" [ee074288-4444-4a36-9b38-4d6c8370897d] Running
	I1109 10:54:19.961848   34901 system_pods.go:89] "kube-scheduler-cilium-104028" [1bb8cd8f-018e-4dfd-9b9a-7de12ebfd955] Running
	I1109 10:54:19.961851   34901 system_pods.go:89] "storage-provisioner" [ba61deb5-35c5-429e-b6d3-3b4024c222e6] Running
	I1109 10:54:19.961856   34901 system_pods.go:126] duration metric: took 203.075422ms to wait for k8s-apps to be running ...
	I1109 10:54:19.961860   34901 system_svc.go:44] waiting for kubelet service to be running ....
	I1109 10:54:19.961921   34901 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 10:54:19.971646   34901 system_svc.go:56] duration metric: took 9.781201ms WaitForService to wait for kubelet.
	I1109 10:54:19.971657   34901 kubeadm.go:573] duration metric: took 57.272114302s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I1109 10:54:19.971678   34901 node_conditions.go:102] verifying NodePressure condition ...
	I1109 10:54:20.155623   34901 node_conditions.go:122] node storage ephemeral capacity is 115273188Ki
	I1109 10:54:20.155640   34901 node_conditions.go:123] node cpu capacity is 6
	I1109 10:54:20.155651   34901 node_conditions.go:105] duration metric: took 183.970152ms to run NodePressure ...
	I1109 10:54:20.155658   34901 start.go:217] waiting for startup goroutines ...
	I1109 10:54:20.155986   34901 ssh_runner.go:195] Run: rm -f paused
	I1109 10:54:20.195525   34901 start.go:506] kubectl: 1.25.2, cluster: 1.25.3 (minor skew: 0)
	I1109 10:54:20.218609   34901 out.go:177] * Done! kubectl is now configured to use "cilium-104028" cluster and "default" namespace by default
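
At this point the profile's context is the current one, so a quick sanity check needs no extra flags (kubectl 1.25.2 per the version line above):

	kubectl get pods -A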
	I1109 10:54:17.458921   35087 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1109 10:54:17.603204   35087 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1109 10:54:17.667636   35087 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
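
Note that the restart path replays individual kubeadm phases (certs, kubeconfig, kubelet-start, control-plane, etcd) instead of running a full kubeadm init; the available phases and their order can be listed with:

	kubeadm init phase --help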
	I1109 10:54:17.741989   35087 api_server.go:51] waiting for apiserver process to appear ...
	I1109 10:54:17.742082   35087 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 10:54:18.300365   35087 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 10:54:18.801308   35087 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 10:54:18.812789   35087 api_server.go:71] duration metric: took 1.070811138s to wait for apiserver process to appear ...
	I1109 10:54:18.812800   35087 api_server.go:87] waiting for apiserver healthz status ...
	I1109 10:54:18.812807   35087 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:63799/healthz ...
	I1109 10:54:21.708128   35087 api_server.go:278] https://127.0.0.1:63799/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1109 10:54:21.708146   35087 api_server.go:102] status: https://127.0.0.1:63799/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1109 10:54:22.208400   35087 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:63799/healthz ...
	I1109 10:54:22.214913   35087 api_server.go:278] https://127.0.0.1:63799/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1109 10:54:22.214927   35087 api_server.go:102] status: https://127.0.0.1:63799/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1109 10:54:22.708338   35087 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:63799/healthz ...
	I1109 10:54:22.714312   35087 api_server.go:278] https://127.0.0.1:63799/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1109 10:54:22.714332   35087 api_server.go:102] status: https://127.0.0.1:63799/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1109 10:54:23.209124   35087 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:63799/healthz ...
	I1109 10:54:23.216557   35087 api_server.go:278] https://127.0.0.1:63799/healthz returned 200:
	ok
	I1109 10:54:23.223171   35087 api_server.go:140] control plane version: v1.25.3
	I1109 10:54:23.223182   35087 api_server.go:130] duration metric: took 4.410417783s to wait for apiserver health ...
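
	The healthz progression above is the expected restart sequence: the first probe gets 403 because anonymous access to /healthz is only granted once the rbac/bootstrap-roles post-start hook has created the bootstrap roles (system:public-info-viewer covers /healthz), the next probes get 500 while individual post-start hooks still report failures, and the check passes once every hook is ok. A minimal sketch of the same poll against this run's mapped port 63799:

	  # -k skips TLS verification, -s is silent, -f turns HTTP errors into a non-zero exit
	  until curl -ksf https://127.0.0.1:63799/healthz; do sleep 0.5; done
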
	I1109 10:54:23.223187   35087 cni.go:95] Creating CNI manager for ""
	I1109 10:54:23.223192   35087 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1109 10:54:23.223196   35087 system_pods.go:43] waiting for kube-system pods to appear ...
	I1109 10:54:23.227827   35087 system_pods.go:59] 5 kube-system pods found
	I1109 10:54:23.227840   35087 system_pods.go:61] "etcd-kubernetes-upgrade-104454" [7a03416b-e19e-4187-a3ea-fd5adcef5241] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1109 10:54:23.227847   35087 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-104454" [113aad4e-1233-48c8-b02e-dc79261594ba] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1109 10:54:23.227853   35087 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-104454" [0d612fac-e85b-4b27-a400-7e21f49ca7b6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1109 10:54:23.227860   35087 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-104454" [5a42c351-c7eb-44c7-9d96-5b7b93af8c87] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1109 10:54:23.227867   35087 system_pods.go:61] "storage-provisioner" [85fd823a-a176-49a9-bb68-579c48faad29] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1109 10:54:23.227871   35087 system_pods.go:74] duration metric: took 4.672474ms to wait for pod list to return data ...
	I1109 10:54:23.227878   35087 node_conditions.go:102] verifying NodePressure condition ...
	I1109 10:54:23.230355   35087 node_conditions.go:122] node storage ephemeral capacity is 115273188Ki
	I1109 10:54:23.230367   35087 node_conditions.go:123] node cpu capacity is 6
	I1109 10:54:23.230378   35087 node_conditions.go:105] duration metric: took 2.49426ms to run NodePressure ...
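
	The NodePressure step only reads the node's advertised capacity; the same figures (6 CPUs, 115273188Ki of ephemeral storage) can be fetched directly, for example:

	  kubectl --context kubernetes-upgrade-104454 get node kubernetes-upgrade-104454 \
	    -o jsonpath='{.status.capacity}'
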
	I1109 10:54:23.230389   35087 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1109 10:54:23.345562   35087 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1109 10:54:23.352935   35087 ops.go:34] apiserver oom_adj: -16
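
	An oom_adj of -16 tells the kernel OOM killer to strongly deprioritize the apiserver process. It can be inspected the same way the test does, together with its modern oom_score_adj counterpart:

	  sudo sh -c 'p=$(pgrep -xn kube-apiserver); cat /proc/$p/oom_adj /proc/$p/oom_score_adj'
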
	I1109 10:54:23.352945   35087 kubeadm.go:631] restartCluster took 27.172220044s
	I1109 10:54:23.352951   35087 kubeadm.go:398] StartCluster complete in 27.212164883s
	I1109 10:54:23.352965   35087 settings.go:142] acquiring lock: {Name:mke93232301b59b22d43a378e933baa222d3feda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 10:54:23.353047   35087 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/15331-22028/kubeconfig
	I1109 10:54:23.353706   35087 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15331-22028/kubeconfig: {Name:mk02bb1c68cad934afd737965b2dbda8f5a4ba2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 10:54:23.354352   35087 kapi.go:59] client config for kubernetes-upgrade-104454: &rest.Config{Host:"https://127.0.0.1:63799", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/kubernetes-upgrade-104454/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/kubernetes-upgrade-104454/client.key", CAFile:"/Users/jenkins/minikube-integration/15331-22028/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x23463c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1109 10:54:23.356932   35087 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "kubernetes-upgrade-104454" rescaled to 1
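
	Rescaling coredns to one replica avoids running redundant DNS pods on a single-node cluster; a sketch of the equivalent manual command (not the exact API call minikube makes):

	  kubectl --context kubernetes-upgrade-104454 -n kube-system scale deployment coredns --replicas=1
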
	I1109 10:54:23.356962   35087 start.go:212] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1109 10:54:23.356974   35087 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1109 10:54:23.356998   35087 addons.go:486] enableAddons start: toEnable=map[default-storageclass:true storage-provisioner:true], additional=[]
	I1109 10:54:23.357122   35087 config.go:180] Loaded profile config "kubernetes-upgrade-104454": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1109 10:54:23.399343   35087 out.go:177] * Verifying Kubernetes components...
	I1109 10:54:23.399430   35087 addons.go:65] Setting storage-provisioner=true in profile "kubernetes-upgrade-104454"
	I1109 10:54:23.399439   35087 addons.go:65] Setting default-storageclass=true in profile "kubernetes-upgrade-104454"
	I1109 10:54:23.420166   35087 addons.go:227] Setting addon storage-provisioner=true in "kubernetes-upgrade-104454"
	W1109 10:54:23.420174   35087 addons.go:236] addon storage-provisioner should already be in state true
	I1109 10:54:23.418268   35087 start.go:806] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1109 10:54:23.420176   35087 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-104454"
	I1109 10:54:23.420181   35087 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 10:54:23.420226   35087 host.go:66] Checking if "kubernetes-upgrade-104454" exists ...
	I1109 10:54:23.420463   35087 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-104454 --format={{.State.Status}}
	I1109 10:54:23.420570   35087 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-104454 --format={{.State.Status}}
	I1109 10:54:23.431729   35087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-104454
	I1109 10:54:23.485075   35087 kapi.go:59] client config for kubernetes-upgrade-104454: &rest.Config{Host:"https://127.0.0.1:63799", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/kubernetes-upgrade-104454/client.crt", KeyFile:"/Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/kubernetes-upgrade-104454/client.key", CAFile:"/Users/jenkins/minikube-integration/15331-22028/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x23463c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1109 10:54:23.490987   35087 addons.go:227] Setting addon default-storageclass=true in "kubernetes-upgrade-104454"
	I1109 10:54:23.507085   35087 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1109 10:54:23.507112   35087 addons.go:236] addon default-storageclass should already be in state true
	I1109 10:54:23.507189   35087 host.go:66] Checking if "kubernetes-upgrade-104454" exists ...
	I1109 10:54:23.544819   35087 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 10:54:23.544834   35087 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1109 10:54:23.544929   35087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-104454
	I1109 10:54:23.545945   35087 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-104454 --format={{.State.Status}}
	I1109 10:54:23.558781   35087 api_server.go:51] waiting for apiserver process to appear ...
	I1109 10:54:23.558869   35087 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 10:54:23.569371   35087 api_server.go:71] duration metric: took 212.395239ms to wait for apiserver process to appear ...
	I1109 10:54:23.569386   35087 api_server.go:87] waiting for apiserver healthz status ...
	I1109 10:54:23.569394   35087 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:63799/healthz ...
	I1109 10:54:23.575452   35087 api_server.go:278] https://127.0.0.1:63799/healthz returned 200:
	ok
	I1109 10:54:23.577214   35087 api_server.go:140] control plane version: v1.25.3
	I1109 10:54:23.577226   35087 api_server.go:130] duration metric: took 7.835108ms to wait for apiserver health ...
	I1109 10:54:23.577232   35087 system_pods.go:43] waiting for kube-system pods to appear ...
	I1109 10:54:23.582164   35087 system_pods.go:59] 5 kube-system pods found
	I1109 10:54:23.582184   35087 system_pods.go:61] "etcd-kubernetes-upgrade-104454" [7a03416b-e19e-4187-a3ea-fd5adcef5241] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1109 10:54:23.582191   35087 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-104454" [113aad4e-1233-48c8-b02e-dc79261594ba] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1109 10:54:23.582205   35087 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-104454" [0d612fac-e85b-4b27-a400-7e21f49ca7b6] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1109 10:54:23.582212   35087 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-104454" [5a42c351-c7eb-44c7-9d96-5b7b93af8c87] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1109 10:54:23.582216   35087 system_pods.go:61] "storage-provisioner" [85fd823a-a176-49a9-bb68-579c48faad29] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1109 10:54:23.582221   35087 system_pods.go:74] duration metric: took 4.985968ms to wait for pod list to return data ...
	I1109 10:54:23.582227   35087 kubeadm.go:573] duration metric: took 225.255548ms to wait for : map[apiserver:true system_pods:true] ...
	I1109 10:54:23.582237   35087 node_conditions.go:102] verifying NodePressure condition ...
	I1109 10:54:23.586004   35087 node_conditions.go:122] node storage ephemeral capacity is 115273188Ki
	I1109 10:54:23.586022   35087 node_conditions.go:123] node cpu capacity is 6
	I1109 10:54:23.586034   35087 node_conditions.go:105] duration metric: took 3.792985ms to run NodePressure ...
	I1109 10:54:23.586044   35087 start.go:217] waiting for startup goroutines ...
	I1109 10:54:23.609382   35087 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I1109 10:54:23.609394   35087 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1109 10:54:23.609480   35087 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-104454
	I1109 10:54:23.609603   35087 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63800 SSHKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/kubernetes-upgrade-104454/id_rsa Username:docker}
	I1109 10:54:23.667230   35087 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63800 SSHKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/kubernetes-upgrade-104454/id_rsa Username:docker}
	I1109 10:54:23.702954   35087 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 10:54:23.765435   35087 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1109 10:54:24.367212   35087 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1109 10:54:24.404158   35087 addons.go:488] enableAddons completed in 1.047176265s
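
	Each addon is installed by copying its manifest to /etc/kubernetes/addons on the node and applying it with the pinned kubectl, as the commands above show. The resulting addon state can be checked from the host, for example:

	  out/minikube-darwin-amd64 addons list -p kubernetes-upgrade-104454
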
	I1109 10:54:24.406543   35087 ssh_runner.go:195] Run: rm -f paused
	I1109 10:54:24.446865   35087 start.go:506] kubectl: 1.25.2, cluster: 1.25.3 (minor skew: 0)
	I1109 10:54:24.468525   35087 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-104454" cluster and "default" namespace by default
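
	The "minor skew: 0" line compares the client and cluster versions; kubectl supports one minor version of skew in either direction, so 1.25.2 against 1.25.3 is within policy. The same comparison can be reproduced with:

	  kubectl --context kubernetes-upgrade-104454 version --short
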
	
	* 
	* ==> Docker <==
	* -- Logs begin at Wed 2022-11-09 18:49:07 UTC, end at Wed 2022-11-09 18:54:25 UTC. --
	Nov 09 18:53:53 kubernetes-upgrade-104454 dockerd[12108]: time="2022-11-09T18:53:53.778069400Z" level=info msg="ignoring event" container=8f2ad5c9bc22fd953593be820cdca93c968765833320d9d03fb923d65c674e15 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 09 18:53:53 kubernetes-upgrade-104454 dockerd[12108]: time="2022-11-09T18:53:53.828303176Z" level=info msg="ignoring event" container=c164519219aa8cc12e1766b67c1ee90ada46e433fafc82a13a61544644e86e9d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 09 18:53:53 kubernetes-upgrade-104454 dockerd[12108]: time="2022-11-09T18:53:53.838525435Z" level=info msg="ignoring event" container=4b02ca893163087b593a89ceffe33919d45a30c968d62ff3d6c10ee0a05a6e4d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 09 18:53:53 kubernetes-upgrade-104454 dockerd[12108]: time="2022-11-09T18:53:53.845951685Z" level=info msg="ignoring event" container=1f2f43e3e00fef261d994250edd11e0272b1319ea524ffc371b836430d18ceb6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 09 18:53:54 kubernetes-upgrade-104454 dockerd[12108]: time="2022-11-09T18:53:54.580433156Z" level=info msg="ignoring event" container=ae45d383399ad1ae452ad26ce05ab46b51a7192e2e9ed53553d7f4bcb88c75b1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 09 18:53:54 kubernetes-upgrade-104454 dockerd[12108]: time="2022-11-09T18:53:54.734635953Z" level=info msg="Removing stale sandbox dfc71baef619bc9892a10da89caace67f18a82a93d4bd6c9ac246e073f70aa70 (c164519219aa8cc12e1766b67c1ee90ada46e433fafc82a13a61544644e86e9d)"
	Nov 09 18:53:54 kubernetes-upgrade-104454 dockerd[12108]: time="2022-11-09T18:53:54.735954749Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 0b6f67083d01eda9872f276c7ff9f7e31aad5634080660b0f537f7a776c15fb0 a90f8be2d14b05fee0142a10e07dcfb05c88b1f274847a5e4b90a22aeb7d813e], retrying...."
	Nov 09 18:53:54 kubernetes-upgrade-104454 dockerd[12108]: time="2022-11-09T18:53:54.821197273Z" level=info msg="Removing stale sandbox 272ab11f8a07e78228854a8a3c0839d080eb7db8a7902fea8d2c8431d452fc1b (8f2ad5c9bc22fd953593be820cdca93c968765833320d9d03fb923d65c674e15)"
	Nov 09 18:53:54 kubernetes-upgrade-104454 dockerd[12108]: time="2022-11-09T18:53:54.822322581Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 0b6f67083d01eda9872f276c7ff9f7e31aad5634080660b0f537f7a776c15fb0 a4f2224651606b098f156612754a21c84b460d32b3f2e1e897f20f797fd7324a], retrying...."
	Nov 09 18:53:54 kubernetes-upgrade-104454 dockerd[12108]: time="2022-11-09T18:53:54.900005079Z" level=info msg="Removing stale sandbox 2fe4779ef2918732c2d88988b53e800eb90853466c1f33219b5a7db9c5aeb5ea (18b9663be8ac7937aa2a6bcd304c79e3d0e316ed3957974c6a0881d119d1ae22)"
	Nov 09 18:53:54 kubernetes-upgrade-104454 dockerd[12108]: time="2022-11-09T18:53:54.901091815Z" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 0b6f67083d01eda9872f276c7ff9f7e31aad5634080660b0f537f7a776c15fb0 2d6818a1d140380c07a2c043c65061adafc9356c80cb92bae3a8fd7c152cc899], retrying...."
	Nov 09 18:53:54 kubernetes-upgrade-104454 dockerd[12108]: time="2022-11-09T18:53:54.922716256Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Nov 09 18:53:54 kubernetes-upgrade-104454 dockerd[12108]: time="2022-11-09T18:53:54.956341740Z" level=info msg="Loading containers: done."
	Nov 09 18:53:54 kubernetes-upgrade-104454 dockerd[12108]: time="2022-11-09T18:53:54.965256531Z" level=info msg="Docker daemon" commit=03df974 graphdriver(s)=overlay2 version=20.10.20
	Nov 09 18:53:54 kubernetes-upgrade-104454 dockerd[12108]: time="2022-11-09T18:53:54.965366243Z" level=info msg="Daemon has completed initialization"
	Nov 09 18:53:54 kubernetes-upgrade-104454 systemd[1]: Started Docker Application Container Engine.
	Nov 09 18:53:54 kubernetes-upgrade-104454 dockerd[12108]: time="2022-11-09T18:53:54.993719246Z" level=info msg="API listen on [::]:2376"
	Nov 09 18:53:54 kubernetes-upgrade-104454 dockerd[12108]: time="2022-11-09T18:53:54.996256787Z" level=info msg="API listen on /var/run/docker.sock"
	Nov 09 18:54:15 kubernetes-upgrade-104454 dockerd[12108]: time="2022-11-09T18:54:15.932178975Z" level=info msg="ignoring event" container=9b83a1a3df6bd534c449fc0c03d8ff4b2f9e72e09ed5f8d35e1dff1c4b1b2ea8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 09 18:54:15 kubernetes-upgrade-104454 dockerd[12108]: time="2022-11-09T18:54:15.934673038Z" level=info msg="ignoring event" container=eb694d87f5ed0ce6e0c0940b8a0b17b1254fc782b23a9bb2049778593966328b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 09 18:54:15 kubernetes-upgrade-104454 dockerd[12108]: time="2022-11-09T18:54:15.940086607Z" level=info msg="ignoring event" container=662f692e9a57867a0d7563ce4d1e32c2fce8d70bfad1eed5b26c978d11f8d939 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 09 18:54:15 kubernetes-upgrade-104454 dockerd[12108]: time="2022-11-09T18:54:15.940157217Z" level=info msg="ignoring event" container=7d87dfbeabf686ed5572e5ea3dc58195457c89fac7de4d743b37b7516c5d6600 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 09 18:54:15 kubernetes-upgrade-104454 dockerd[12108]: time="2022-11-09T18:54:15.941992438Z" level=info msg="ignoring event" container=42b377adcd15bf4260948df4c745ed3f31ce1a7dddcb1e06e5e5d41eeb79c5ec module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 09 18:54:15 kubernetes-upgrade-104454 dockerd[12108]: time="2022-11-09T18:54:15.943828953Z" level=info msg="ignoring event" container=a62adef85dbdf812d6815477907719adf04922ee45ac9676ee24c63d9949749c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Nov 09 18:54:16 kubernetes-upgrade-104454 dockerd[12108]: time="2022-11-09T18:54:16.321314923Z" level=info msg="ignoring event" container=a9b6924ff0e3d3c704f6a2a4e3f55a7308b28838eb212dc6eb97890e79adad23 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID
	3a44a6af7fad6       6d23ec0e8b87e       8 seconds ago       Running             kube-scheduler            2                   4eba125cdec1b
	b32369a2758ab       6039992312758       8 seconds ago       Running             kube-controller-manager   3                   a72a8eef011aa
	cd23a60e6d49b       0346dbd74bcb9       8 seconds ago       Running             kube-apiserver            2                   47ea8a3415244
	3f1d7ad5c9403       a8a176a5d5d69       8 seconds ago       Running             etcd                      3                   1b158d5927401
	7d87dfbeabf68       6039992312758       14 seconds ago      Exited              kube-controller-manager   2                   662f692e9a578
	a62adef85dbdf       a8a176a5d5d69       15 seconds ago      Exited              etcd                      2                   eb694d87f5ed0
	a9b6924ff0e3d       0346dbd74bcb9       31 seconds ago      Exited              kube-apiserver            1                   42b377adcd15b
	ae45d383399ad       6d23ec0e8b87e       36 seconds ago      Exited              kube-scheduler            1                   18b9663be8ac7
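
	The ATTEMPT column shows each control-plane container was restarted during the upgrade; the Exited rows are the superseded attempts. With the docker driver the node is itself a container running its own dockerd, so the same listing is available via nested docker:

	  docker exec kubernetes-upgrade-104454 docker ps -a --format '{{.Names}}\t{{.Status}}'
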
	
	* 
	* ==> describe nodes <==
	* Name:               kubernetes-upgrade-104454
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-104454
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b216797ebc629f5d4ea32d96a0fffe1acee1fa4c
	                    minikube.k8s.io/name=kubernetes-upgrade-104454
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2022_11_09T10_53_44_0700
	                    minikube.k8s.io/version=v1.28.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 09 Nov 2022 18:53:40 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-104454
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 09 Nov 2022 18:54:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 09 Nov 2022 18:54:21 +0000   Wed, 09 Nov 2022 18:53:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 09 Nov 2022 18:54:21 +0000   Wed, 09 Nov 2022 18:53:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 09 Nov 2022 18:54:21 +0000   Wed, 09 Nov 2022 18:53:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 09 Nov 2022 18:54:21 +0000   Wed, 09 Nov 2022 18:54:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    kubernetes-upgrade-104454
	Capacity:
	  cpu:                6
	  ephemeral-storage:  115273188Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6085664Ki
	  pods:               110
	Allocatable:
	  cpu:                6
	  ephemeral-storage:  115273188Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             6085664Ki
	  pods:               110
	System Info:
	  Machine ID:                 996614ec4c814b87b7ec8ebee3d0e8c9
	  System UUID:                a2a13bae-fb1d-4380-aed2-b26957a86cad
	  Boot ID:                    fdb96f1f-af28-4634-9005-a24337fbfb7f
	  Kernel Version:             5.15.49-linuxkit
	  OS Image:                   Ubuntu 20.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://20.10.20
	  Kubelet Version:            v1.25.3
	  Kube-Proxy Version:         v1.25.3
	Non-terminated Pods:          (4 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-kubernetes-upgrade-104454                       100m (1%)     0 (0%)      100Mi (1%)       0 (0%)         42s
	  kube-system                 kube-apiserver-kubernetes-upgrade-104454             250m (4%)     0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-104454    200m (3%)     0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 kube-scheduler-kubernetes-upgrade-104454             100m (1%)     0 (0%)      0 (0%)           0 (0%)         43s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (10%)  0 (0%)
	  memory             100Mi (1%)  0 (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From     Message
	  ----    ------                   ----               ----     -------
	  Normal  NodeHasSufficientMemory  49s (x5 over 50s)  kubelet  Node kubernetes-upgrade-104454 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    49s (x5 over 50s)  kubelet  Node kubernetes-upgrade-104454 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     49s (x4 over 50s)  kubelet  Node kubernetes-upgrade-104454 status is now: NodeHasSufficientPID
	  Normal  Starting                 42s                kubelet  Starting kubelet.
	  Normal  NodeAllocatableEnforced  42s                kubelet  Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  42s                kubelet  Node kubernetes-upgrade-104454 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    42s                kubelet  Node kubernetes-upgrade-104454 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     42s                kubelet  Node kubernetes-upgrade-104454 status is now: NodeHasSufficientPID
	  Normal  Starting                 9s                 kubelet  Starting kubelet.
	  Normal  NodeHasSufficientMemory  9s (x6 over 9s)    kubelet  Node kubernetes-upgrade-104454 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9s (x6 over 9s)    kubelet  Node kubernetes-upgrade-104454 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9s (x6 over 9s)    kubelet  Node kubernetes-upgrade-104454 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9s                 kubelet  Updated Node Allocatable limit across pods
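
	The node.kubernetes.io/not-ready:NoSchedule taint listed above is why storage-provisioner shows as Pending/Unschedulable in the pod lists; the kubelet drops the taint once the node reports Ready. The taints can be checked directly:

	  kubectl --context kubernetes-upgrade-104454 get node kubernetes-upgrade-104454 \
	    -o jsonpath='{range .spec.taints[*]}{.key}={.effect}{"\n"}{end}'
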
	
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> etcd [3f1d7ad5c940] <==
	* {"level":"info","ts":"2022-11-09T18:54:18.612Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"8688e899f7831fc7","local-server-version":"3.5.4","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2022-11-09T18:54:18.617Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2022-11-09T18:54:18.617Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-11-09T18:54:18.617Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-11-09T18:54:18.618Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-11-09T18:54:18.618Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2022-11-09T18:54:18.618Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-11-09T18:54:18.618Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 switched to configuration voters=(9694253945895198663)"}
	{"level":"info","ts":"2022-11-09T18:54:18.618Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","added-peer-id":"8688e899f7831fc7","added-peer-peer-urls":["https://192.168.67.2:2380"]}
	{"level":"info","ts":"2022-11-09T18:54:18.618Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2022-11-09T18:54:18.618Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2022-11-09T18:54:20.105Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 4"}
	{"level":"info","ts":"2022-11-09T18:54:20.106Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 4"}
	{"level":"info","ts":"2022-11-09T18:54:20.106Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 4"}
	{"level":"info","ts":"2022-11-09T18:54:20.106Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 5"}
	{"level":"info","ts":"2022-11-09T18:54:20.106Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 5"}
	{"level":"info","ts":"2022-11-09T18:54:20.106Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 5"}
	{"level":"info","ts":"2022-11-09T18:54:20.106Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 5"}
	{"level":"info","ts":"2022-11-09T18:54:20.106Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-11-09T18:54:20.106Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-11-09T18:54:20.106Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:kubernetes-upgrade-104454 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2022-11-09T18:54:20.107Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-11-09T18:54:20.107Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-11-09T18:54:20.108Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2022-11-09T18:54:20.108Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> etcd [a62adef85dbd] <==
	* {"level":"info","ts":"2022-11-09T18:54:11.408Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-11-09T18:54:11.408Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2022-11-09T18:54:11.408Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2022-11-09T18:54:12.700Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 3"}
	{"level":"info","ts":"2022-11-09T18:54:12.700Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 3"}
	{"level":"info","ts":"2022-11-09T18:54:12.701Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2022-11-09T18:54:12.701Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 4"}
	{"level":"info","ts":"2022-11-09T18:54:12.701Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 4"}
	{"level":"info","ts":"2022-11-09T18:54:12.701Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 4"}
	{"level":"info","ts":"2022-11-09T18:54:12.701Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 4"}
	{"level":"info","ts":"2022-11-09T18:54:12.701Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:kubernetes-upgrade-104454 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2022-11-09T18:54:12.701Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-11-09T18:54:12.701Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2022-11-09T18:54:12.701Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2022-11-09T18:54:12.701Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2022-11-09T18:54:12.702Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2022-11-09T18:54:12.702Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2022-11-09T18:54:15.862Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2022-11-09T18:54:15.862Z","caller":"embed/etcd.go:368","msg":"closing etcd server","name":"kubernetes-upgrade-104454","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"]}
	WARNING: 2022/11/09 18:54:15 [core] grpc: addrConn.createTransport failed to connect to {127.0.0.1:2379 127.0.0.1:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused". Reconnecting...
	WARNING: 2022/11/09 18:54:15 [core] grpc: addrConn.createTransport failed to connect to {192.168.67.2:2379 192.168.67.2:2379 <nil> 0 <nil>}. Err: connection error: desc = "transport: Error while dialing dial tcp 192.168.67.2:2379: connect: connection refused". Reconnecting...
	{"level":"info","ts":"2022-11-09T18:54:15.870Z","caller":"etcdserver/server.go:1453","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"8688e899f7831fc7","current-leader-member-id":"8688e899f7831fc7"}
	{"level":"info","ts":"2022-11-09T18:54:15.871Z","caller":"embed/etcd.go:563","msg":"stopping serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-11-09T18:54:15.872Z","caller":"embed/etcd.go:568","msg":"stopped serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2022-11-09T18:54:15.872Z","caller":"embed/etcd.go:370","msg":"closed etcd server","name":"kubernetes-upgrade-104454","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"]}
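
	"skipped leadership transfer for single voting member cluster" is the normal shutdown path for a one-member etcd: there is no peer to hand leadership to. A sketch for confirming membership from the node, assuming etcdctl is available inside it:

	  docker exec kubernetes-upgrade-104454 sh -c 'ETCDCTL_API=3 etcdctl \
	    --endpoints=https://127.0.0.1:2379 \
	    --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	    --cert=/var/lib/minikube/certs/etcd/server.crt \
	    --key=/var/lib/minikube/certs/etcd/server.key member list'
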
	
	* 
	* ==> kernel <==
	*  18:54:27 up  3:53,  0 users,  load average: 4.06, 2.14, 1.33
	Linux kubernetes-upgrade-104454 5.15.49-linuxkit #1 SMP Tue Sep 13 07:51:46 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kube-apiserver [a9b6924ff0e3] <==
	* W1109 18:54:15.865938       1 logging.go:59] [core] [Channel #38 SubChannel #39] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W1109 18:54:15.865975       1 logging.go:59] [core] [Channel #53 SubChannel #54] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	W1109 18:54:15.866533       1 logging.go:59] [core] [Channel #93 SubChannel #94] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:2379: connect: connection refused"
	I1109 18:54:15.873411       1 controller.go:211] Shutting down kubernetes service endpoint reconciler
	
	* 
	* ==> kube-apiserver [cd23a60e6d49] <==
	* I1109 18:54:21.709632       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I1109 18:54:21.709674       1 shared_informer.go:255] Waiting for caches to sync for crd-autoregister
	I1109 18:54:21.714523       1 controller.go:85] Starting OpenAPI controller
	I1109 18:54:21.714993       1 controller.go:85] Starting OpenAPI V3 controller
	I1109 18:54:21.715039       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I1109 18:54:21.716704       1 naming_controller.go:291] Starting NamingConditionController
	I1109 18:54:21.716751       1 establishing_controller.go:76] Starting EstablishingController
	I1109 18:54:21.716766       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I1109 18:54:21.716769       1 crd_finalizer.go:266] Starting CRDFinalizer
	I1109 18:54:21.723731       1 controller.go:616] quota admission added evaluator for: leases.coordination.k8s.io
	E1109 18:54:21.726259       1 controller.go:159] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I1109 18:54:21.784359       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I1109 18:54:21.784711       1 cache.go:39] Caches are synced for autoregister controller
	I1109 18:54:21.785655       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1109 18:54:21.786351       1 apf_controller.go:305] Running API Priority and Fairness config worker
	I1109 18:54:21.786789       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1109 18:54:21.810334       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I1109 18:54:21.822467       1 shared_informer.go:262] Caches are synced for node_authorizer
	I1109 18:54:22.499439       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1109 18:54:22.689062       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1109 18:54:23.298318       1 controller.go:616] quota admission added evaluator for: serviceaccounts
	I1109 18:54:23.304427       1 controller.go:616] quota admission added evaluator for: deployments.apps
	I1109 18:54:23.323292       1 controller.go:616] quota admission added evaluator for: daemonsets.apps
	I1109 18:54:23.338002       1 controller.go:616] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1109 18:54:23.343174       1 controller.go:616] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	* 
	* ==> kube-controller-manager [7d87dfbeabf6] <==
	* I1109 18:54:12.843442       1 serving.go:348] Generated self-signed cert in-memory
	I1109 18:54:13.190506       1 controllermanager.go:178] Version: v1.25.3
	I1109 18:54:13.190551       1 controllermanager.go:180] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 18:54:13.191410       1 secure_serving.go:210] Serving securely on 127.0.0.1:10257
	I1109 18:54:13.191436       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1109 18:54:13.191571       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1109 18:54:13.191781       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	
	* 
	* ==> kube-controller-manager [b32369a2758a] <==
	* I1109 18:54:25.539848       1 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for poddisruptionbudgets.policy
	I1109 18:54:25.539860       1 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for statefulsets.apps
	I1109 18:54:25.539891       1 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for replicasets.apps
	I1109 18:54:25.539902       1 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for networkpolicies.networking.k8s.io
	I1109 18:54:25.539924       1 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for podtemplates
	I1109 18:54:25.539933       1 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for daemonsets.apps
	I1109 18:54:25.539949       1 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for rolebindings.rbac.authorization.k8s.io
	I1109 18:54:25.539965       1 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for endpointslices.discovery.k8s.io
	I1109 18:54:25.539975       1 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for limitranges
	I1109 18:54:25.539982       1 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for endpoints
	I1109 18:54:25.540003       1 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for deployments.apps
	I1109 18:54:25.540020       1 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for ingresses.networking.k8s.io
	I1109 18:54:25.540080       1 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for roles.rbac.authorization.k8s.io
	I1109 18:54:25.540095       1 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for leases.coordination.k8s.io
	W1109 18:54:25.540102       1 shared_informer.go:533] resyncPeriod 12h4m16.230762396s is smaller than resyncCheckPeriod 18h45m52.185986239s and the informer has already started. Changing it to 18h45m52.185986239s
	W1109 18:54:25.540148       1 shared_informer.go:533] resyncPeriod 18h44m58.422235687s is smaller than resyncCheckPeriod 18h45m52.185986239s and the informer has already started. Changing it to 18h45m52.185986239s
	I1109 18:54:25.540199       1 resource_quota_monitor.go:218] QuotaMonitor created object count evaluator for serviceaccounts
	I1109 18:54:25.540315       1 controllermanager.go:603] Started "resourcequota"
	I1109 18:54:25.540388       1 resource_quota_controller.go:277] Starting resource quota controller
	I1109 18:54:25.540397       1 shared_informer.go:255] Waiting for caches to sync for resource quota
	I1109 18:54:25.540430       1 resource_quota_monitor.go:295] QuotaMonitor running
	I1109 18:54:25.683198       1 controllermanager.go:603] Started "ttl"
	I1109 18:54:25.683218       1 ttl_controller.go:120] Starting TTL controller
	I1109 18:54:25.683340       1 shared_informer.go:255] Waiting for caches to sync for TTL
	I1109 18:54:25.733044       1 node_ipam_controller.go:91] Sending events to api server.
	
	* 
	* ==> kube-scheduler [3a44a6af7fad] <==
	* I1109 18:54:19.058936       1 serving.go:348] Generated self-signed cert in-memory
	W1109 18:54:21.724601       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1109 18:54:21.724726       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1109 18:54:21.724735       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1109 18:54:21.724760       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1109 18:54:21.732521       1 server.go:148] "Starting Kubernetes Scheduler" version="v1.25.3"
	I1109 18:54:21.732555       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1109 18:54:21.733774       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1109 18:54:21.733872       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1109 18:54:21.733925       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1109 18:54:21.733785       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1109 18:54:21.834944       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
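
	The requestheader warnings above are transient: this scheduler instance started before the apiserver finished its RBAC bootstrap, so reading kube-system/extension-apiserver-authentication was briefly forbidden. Had it persisted, the fix the log itself suggests would look like this (the binding name is a placeholder; the subject matches the denied user):

	  kubectl -n kube-system create rolebinding extension-apiserver-authentication-reader-binding \
	    --role=extension-apiserver-authentication-reader \
	    --user=system:kube-scheduler
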
	
	* 
	* ==> kube-scheduler [ae45d383399a] <==
	* I1109 18:53:51.252418       1 serving.go:348] Generated self-signed cert in-memory
	I1109 18:53:53.525958       1 server.go:148] "Starting Kubernetes Scheduler" version="v1.25.3"
	I1109 18:53:53.526018       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W1109 18:53:54.530188       1 secure_serving.go:111] Initial population of client CA failed: client rate limiter Wait returned an error: context canceled
	I1109 18:53:54.530681       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1109 18:53:54.530820       1 shared_informer.go:255] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1109 18:53:54.530916       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1109 18:53:54.530972       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1109 18:53:54.530795       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1109 18:53:54.531202       1 shared_informer.go:258] unable to sync caches for RequestHeaderAuthRequestController
	I1109 18:53:54.531214       1 requestheader_controller.go:176] Shutting down RequestHeaderAuthRequestController
	E1109 18:53:54.531337       1 shared_informer.go:258] unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1109 18:53:54.531038       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1109 18:53:54.531203       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1109 18:53:54.532076       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	I1109 18:53:54.532242       1 configmap_cafile_content.go:210] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	E1109 18:53:54.532398       1 shared_informer.go:258] unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1109 18:53:54.531130       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1109 18:53:54.532599       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E1109 18:53:54.532925       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2022-11-09 18:49:07 UTC, end at Wed 2022-11-09 18:54:28 UTC. --
	Nov 09 18:54:19 kubernetes-upgrade-104454 kubelet[13614]: E1109 18:54:19.617774   13614 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-104454\" not found"
	Nov 09 18:54:19 kubernetes-upgrade-104454 kubelet[13614]: E1109 18:54:19.718751   13614 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-104454\" not found"
	Nov 09 18:54:19 kubernetes-upgrade-104454 kubelet[13614]: E1109 18:54:19.819501   13614 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-104454\" not found"
	Nov 09 18:54:19 kubernetes-upgrade-104454 kubelet[13614]: E1109 18:54:19.920795   13614 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-104454\" not found"
	Nov 09 18:54:20 kubernetes-upgrade-104454 kubelet[13614]: E1109 18:54:20.022002   13614 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-104454\" not found"
	Nov 09 18:54:20 kubernetes-upgrade-104454 kubelet[13614]: E1109 18:54:20.122490   13614 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-104454\" not found"
	Nov 09 18:54:20 kubernetes-upgrade-104454 kubelet[13614]: E1109 18:54:20.223400   13614 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-104454\" not found"
	Nov 09 18:54:20 kubernetes-upgrade-104454 kubelet[13614]: E1109 18:54:20.324432   13614 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-104454\" not found"
	Nov 09 18:54:20 kubernetes-upgrade-104454 kubelet[13614]: E1109 18:54:20.425407   13614 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-104454\" not found"
	Nov 09 18:54:20 kubernetes-upgrade-104454 kubelet[13614]: E1109 18:54:20.526278   13614 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-104454\" not found"
	Nov 09 18:54:20 kubernetes-upgrade-104454 kubelet[13614]: E1109 18:54:20.627038   13614 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-104454\" not found"
	Nov 09 18:54:20 kubernetes-upgrade-104454 kubelet[13614]: E1109 18:54:20.727746   13614 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-104454\" not found"
	Nov 09 18:54:20 kubernetes-upgrade-104454 kubelet[13614]: E1109 18:54:20.828883   13614 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-104454\" not found"
	Nov 09 18:54:20 kubernetes-upgrade-104454 kubelet[13614]: E1109 18:54:20.928983   13614 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-104454\" not found"
	Nov 09 18:54:21 kubernetes-upgrade-104454 kubelet[13614]: E1109 18:54:21.029852   13614 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-104454\" not found"
	Nov 09 18:54:21 kubernetes-upgrade-104454 kubelet[13614]: E1109 18:54:21.130964   13614 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-104454\" not found"
	Nov 09 18:54:21 kubernetes-upgrade-104454 kubelet[13614]: E1109 18:54:21.231444   13614 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-104454\" not found"
	Nov 09 18:54:21 kubernetes-upgrade-104454 kubelet[13614]: E1109 18:54:21.332268   13614 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-104454\" not found"
	Nov 09 18:54:21 kubernetes-upgrade-104454 kubelet[13614]: E1109 18:54:21.433061   13614 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-104454\" not found"
	Nov 09 18:54:21 kubernetes-upgrade-104454 kubelet[13614]: E1109 18:54:21.534170   13614 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-104454\" not found"
	Nov 09 18:54:21 kubernetes-upgrade-104454 kubelet[13614]: E1109 18:54:21.635539   13614 kubelet.go:2448] "Error getting node" err="node \"kubernetes-upgrade-104454\" not found"
	Nov 09 18:54:21 kubernetes-upgrade-104454 kubelet[13614]: I1109 18:54:21.810013   13614 kubelet_node_status.go:108] "Node was previously registered" node="kubernetes-upgrade-104454"
	Nov 09 18:54:21 kubernetes-upgrade-104454 kubelet[13614]: I1109 18:54:21.810105   13614 kubelet_node_status.go:73] "Successfully registered node" node="kubernetes-upgrade-104454"
	Nov 09 18:54:22 kubernetes-upgrade-104454 kubelet[13614]: I1109 18:54:22.701210   13614 apiserver.go:52] "Watching apiserver"
	Nov 09 18:54:22 kubernetes-upgrade-104454 kubelet[13614]: I1109 18:54:22.743510   13614 reconciler.go:169] "Reconciler: start to sync state"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-104454 -n kubernetes-upgrade-104454
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-104454 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:270: non-running pods: storage-provisioner
helpers_test.go:272: ======> post-mortem[TestKubernetesUpgrade]: describe non-running pods <======
helpers_test.go:275: (dbg) Run:  kubectl --context kubernetes-upgrade-104454 describe pod storage-provisioner
helpers_test.go:275: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-104454 describe pod storage-provisioner: exit status 1 (50.180149ms)

** stderr ** 
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:277: kubectl --context kubernetes-upgrade-104454 describe pod storage-provisioner: exit status 1
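Note: the NotFound above is a namespace artifact rather than proof the pod was gone. In minikube, storage-provisioner runs in kube-system, and a namespace-less describe only searches default. A namespace-qualified variant (a sketch; it assumes the pod still existed when the post-mortem ran) would be:

	kubectl --context kubernetes-upgrade-104454 -n kube-system describe pod storage-provisioner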
helpers_test.go:175: Cleaning up "kubernetes-upgrade-104454" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p kubernetes-upgrade-104454
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p kubernetes-upgrade-104454: (3.06608179s)
--- FAIL: TestKubernetesUpgrade (577.55s)

TestMissingContainerUpgrade (50.5s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:316: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.1.1737095876.exe start -p missing-upgrade-104403 --memory=2200 --driver=docker 
version_upgrade_test.go:316: (dbg) Non-zero exit: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.1.1737095876.exe start -p missing-upgrade-104403 --memory=2200 --driver=docker : exit status 78 (35.165109429s)

-- stdout --
	* [missing-upgrade-104403] minikube v1.9.1 on Darwin 13.0
	  - MINIKUBE_LOCATION=15331
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15331-22028/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15331-22028/.minikube
	* Using the docker driver based on user configuration
	* Starting control plane node m01 in cluster missing-upgrade-104403
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* Deleting "missing-upgrade-104403" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...

-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB
	! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-11-09 18:44:21.604844325 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* [DOCKER_RESTART_FAILED] Failed to start docker container. "minikube start -p missing-upgrade-104403" may fix it. creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-11-09 18:44:38.159845387 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Suggestion: Remove the incompatible --docker-opt flag if one was provided
	* Related issue: https://github.com/kubernetes/minikube/issues/7070

** /stderr **
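Note on the diff above: minikube's provisioner rewrites /lib/systemd/system/docker.service in place, and the comments it injects describe the usual systemd override pattern, in which the empty ExecStart= clears the inherited command so that a Type=notify unit never carries two ExecStart= settings. A minimal sketch of that pattern as a proper drop-in (hypothetical path and a trimmed dockerd command line, not the exact unit minikube writes):

	# /etc/systemd/system/docker.service.d/10-override.conf  (hypothetical path)
	[Service]
	ExecStart=
	ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock

	# pick up the drop-in and restart the unit:
	sudo systemctl daemon-reload && sudo systemctl restart docker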
version_upgrade_test.go:316: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.1.1737095876.exe start -p missing-upgrade-104403 --memory=2200 --driver=docker 
version_upgrade_test.go:316: (dbg) Non-zero exit: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.1.1737095876.exe start -p missing-upgrade-104403 --memory=2200 --driver=docker : exit status 70 (4.042858585s)

-- stdout --
	* [missing-upgrade-104403] minikube v1.9.1 on Darwin 13.0
	  - MINIKUBE_LOCATION=15331
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15331-22028/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15331-22028/.minikube
	* Using the docker driver based on existing profile
	* Starting control plane node m01 in cluster missing-upgrade-104403
	* Pulling base image ...
	* Updating the running docker "missing-upgrade-104403" container ...

-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:316: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.1.1737095876.exe start -p missing-upgrade-104403 --memory=2200 --driver=docker 
version_upgrade_test.go:316: (dbg) Non-zero exit: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.1.1737095876.exe start -p missing-upgrade-104403 --memory=2200 --driver=docker : exit status 70 (4.133678987s)

-- stdout --
	* [missing-upgrade-104403] minikube v1.9.1 on Darwin 13.0
	  - MINIKUBE_LOCATION=15331
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15331-22028/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15331-22028/.minikube
	* Using the docker driver based on existing profile
	* Starting control plane node m01 in cluster missing-upgrade-104403
	* Pulling base image ...
	* Updating the running docker "missing-upgrade-104403" container ...

-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:322: release start failed: exit status 70
panic.go:522: *** TestMissingContainerUpgrade FAILED at 2022-11-09 10:44:51.703515 -0800 PST m=+2516.721288313
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMissingContainerUpgrade]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect missing-upgrade-104403
helpers_test.go:235: (dbg) docker inspect missing-upgrade-104403:

-- stdout --
	[
	    {
	        "Id": "fe6fc38ac385d5d1f557745012fff5a4178c51c5b4599e44f3ad747e269e0b26",
	        "Created": "2022-11-09T18:44:29.777265991Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 159135,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-11-09T18:44:29.992345552Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
	        "ResolvConfPath": "/var/lib/docker/containers/fe6fc38ac385d5d1f557745012fff5a4178c51c5b4599e44f3ad747e269e0b26/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/fe6fc38ac385d5d1f557745012fff5a4178c51c5b4599e44f3ad747e269e0b26/hostname",
	        "HostsPath": "/var/lib/docker/containers/fe6fc38ac385d5d1f557745012fff5a4178c51c5b4599e44f3ad747e269e0b26/hosts",
	        "LogPath": "/var/lib/docker/containers/fe6fc38ac385d5d1f557745012fff5a4178c51c5b4599e44f3ad747e269e0b26/fe6fc38ac385d5d1f557745012fff5a4178c51c5b4599e44f3ad747e269e0b26-json.log",
	        "Name": "/missing-upgrade-104403",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "missing-upgrade-104403:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/51679172d17ab4fb34ea99dd3da05c704906993c9d330974e16c730e8acc2844-init/diff:/var/lib/docker/overlay2/adf43e9e5a547cba3c81ad58a13ce0cabb8055b47dda1a9773228c197ec1bb25/diff:/var/lib/docker/overlay2/a1ad5d662585fd0755745df98e9dd560eae4f83c17196b6705c401b8849560b6/diff:/var/lib/docker/overlay2/b65ab4d9180e458cb3c5d95a7f1611604108a93911873b6eacf99b21f0d79e13/diff:/var/lib/docker/overlay2/6711ed93a15419121e1596eb52e5b3fbb1c3260b5a70286ea862a6bed2498c18/diff:/var/lib/docker/overlay2/0b7f62812d319cafd3b0ecdc5a69625456e984495e6d8270525b24d6b5305a8b/diff:/var/lib/docker/overlay2/fe0b0fd4637acce13df953451faf7cf44c212c3297e795bf4779ad9b78586bf2/diff:/var/lib/docker/overlay2/abb86979eb3adb5617ae06982ce015514373c1a11c53c26a153e9eb9a400136a/diff:/var/lib/docker/overlay2/5b492a5954a50ffc8a17f27a1a143699d0581698e4c2545bf358e41c85bbb913/diff:/var/lib/docker/overlay2/697ebbe64c558705ec8c95f4d52062873e4ab55bdc468bd3e8744cafb216c019/diff:/var/lib/docker/overlay2/eafa9c
71f13dca2cdb5dfbdc82a8a610719008921b2705037fffef109c385b6b/diff:/var/lib/docker/overlay2/65596f0e992c7c35b135f52ae662842139208fecea410c13bf51af9560c1aec6/diff:/var/lib/docker/overlay2/933de91df26a86644ba18fc45850233a1067fa9a9eff2db7a27fab1fd3af8ad9/diff:/var/lib/docker/overlay2/c649483d5cd065cfaa2632de07db045e8cd2c5fb99591e275b01612a4f04e3e6/diff:/var/lib/docker/overlay2/536487bd91bb8f1bd9ef31e39eb56585d1e257d2611bd045a5222a8b024dd7ff/diff:/var/lib/docker/overlay2/15d7006816a41bb58165751d0ccd0d90c91446a6ef8af9228eeaaad9aaa9318a/diff:/var/lib/docker/overlay2/1718e1e95c0786770e4af9b495368e8bfbe0997247281b37064f4beab1086ae0/diff:/var/lib/docker/overlay2/cb4b763a95cd268ecd1734e850256b870a257a506bf8d0154718c2906c11a29f/diff:/var/lib/docker/overlay2/13625002c8224e020493b9afd73b65e21a2bab1396039b2c64126a9f2efc41ed/diff:/var/lib/docker/overlay2/0b5b5d8421147188580f9e20f66b73eaacace1c53792c825c87b6a86e7db6863/diff:/var/lib/docker/overlay2/927b73608b8daedf14b9314365c7341e0bc477aa7479891bff1559a65b7838dc/diff:/var/lib/d
ocker/overlay2/0afc4fde9995e4abd1c497a7eb8b9a854510ccf9d2a1a54d520a04bae419c751/diff",
	                "MergedDir": "/var/lib/docker/overlay2/51679172d17ab4fb34ea99dd3da05c704906993c9d330974e16c730e8acc2844/merged",
	                "UpperDir": "/var/lib/docker/overlay2/51679172d17ab4fb34ea99dd3da05c704906993c9d330974e16c730e8acc2844/diff",
	                "WorkDir": "/var/lib/docker/overlay2/51679172d17ab4fb34ea99dd3da05c704906993c9d330974e16c730e8acc2844/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "missing-upgrade-104403",
	                "Source": "/var/lib/docker/volumes/missing-upgrade-104403/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "missing-upgrade-104403",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	                "container=docker"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "missing-upgrade-104403",
	                "name.minikube.sigs.k8s.io": "missing-upgrade-104403",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0c84402a65606efdad2dc1ec8f3a24143950a73b304c302d67c03a92a9c7ad48",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63543"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63544"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63545"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/0c84402a6560",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "98644435643cc861e0aca56e05614899801cc9e9cefbf3507d417bbf5ea193ad",
	            "Gateway": "172.17.0.1",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "172.17.0.2",
	            "IPPrefixLen": 16,
	            "IPv6Gateway": "",
	            "MacAddress": "02:42:ac:11:00:02",
	            "Networks": {
	                "bridge": {
	                    "IPAMConfig": null,
	                    "Links": null,
	                    "Aliases": null,
	                    "NetworkID": "e8d9424b02579a850439499a33cee1cdbc22bc61e600dad623d03c6ba7a693ad",
	                    "EndpointID": "98644435643cc861e0aca56e05614899801cc9e9cefbf3507d417bbf5ea193ad",
	                    "Gateway": "172.17.0.1",
	                    "IPAddress": "172.17.0.2",
	                    "IPPrefixLen": 16,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:ac:11:00:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
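The dump above is the full JSON from docker inspect; a compact way to pull out just the state fields the post-mortem helpers key on (assumes jq is available on the host; the container name comes from the log):

	docker inspect missing-upgrade-104403 | jq '.[0].State | {Status, ExitCode, StartedAt}'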
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p missing-upgrade-104403 -n missing-upgrade-104403
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p missing-upgrade-104403 -n missing-upgrade-104403: exit status 6 (384.571867ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1109 10:44:52.135997   32815 status.go:415] kubeconfig endpoint: extract IP: "missing-upgrade-104403" does not appear in /Users/jenkins/minikube-integration/15331-22028/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "missing-upgrade-104403" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "missing-upgrade-104403" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p missing-upgrade-104403
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p missing-upgrade-104403: (2.340394195s)
--- FAIL: TestMissingContainerUpgrade (50.50s)

TestStoppedBinaryUpgrade/Upgrade (45.12s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:190: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.0.3558667508.exe start -p stopped-upgrade-104552 --memory=2200 --vm-driver=docker 
E1109 10:45:56.412709   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/addons-100328/client.crt: no such file or directory
E1109 10:46:23.855936   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/skaffold-103914/client.crt: no such file or directory
version_upgrade_test.go:190: (dbg) Non-zero exit: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.0.3558667508.exe start -p stopped-upgrade-104552 --memory=2200 --vm-driver=docker : exit status 70 (34.619735836s)

-- stdout --
	* [stopped-upgrade-104552] minikube v1.9.0 on Darwin 13.0
	  - MINIKUBE_LOCATION=15331
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15331-22028/.minikube
	  - KUBECONFIG=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/legacy_kubeconfig1605429968
	* Using the docker driver based on user configuration
	* Pulling base image ...
	* Downloading Kubernetes v1.18.0 preload ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-11-09 18:46:09.543721480 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* Deleting "stopped-upgrade-104552" in docker ...
	* Creating Kubernetes in docker container with (CPUs=2) (6 available), Memory=2200MB (5943MB available) ...
	* StartHost failed again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-11-09 18:46:25.878228604 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	  - Run: "minikube delete -p stopped-upgrade-104552", then "minikube start -p stopped-upgrade-104552 --alsologtostderr -v=1" to try again with more logging

-- /stdout --
** stderr ** 
	    > preloaded-images-k8s-v2-v1.18.0-docker-overlay2-amd64.tar.lz4: 542.91 MiB
	* 
	X Unable to start VM after repeated tries. Please try {{'minikube delete' if possible: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2019-08-29 04:42:14.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-11-09 18:46:25.878228604 +0000
	@@ -8,24 +8,22 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	-
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP 
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -33,9 +31,10 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:190: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.0.3558667508.exe start -p stopped-upgrade-104552 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:190: (dbg) Non-zero exit: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.0.3558667508.exe start -p stopped-upgrade-104552 --memory=2200 --vm-driver=docker : exit status 70 (4.343653777s)

-- stdout --
	* [stopped-upgrade-104552] minikube v1.9.0 on Darwin 13.0
	  - MINIKUBE_LOCATION=15331
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15331-22028/.minikube
	  - KUBECONFIG=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/legacy_kubeconfig861111111
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "stopped-upgrade-104552" container ...

-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:190: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.0.3558667508.exe start -p stopped-upgrade-104552 --memory=2200 --vm-driver=docker 
version_upgrade_test.go:190: (dbg) Non-zero exit: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.9.0.3558667508.exe start -p stopped-upgrade-104552 --memory=2200 --vm-driver=docker : exit status 70 (4.393590983s)

-- stdout --
	* [stopped-upgrade-104552] minikube v1.9.0 on Darwin 13.0
	  - MINIKUBE_LOCATION=15331
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15331-22028/.minikube
	  - KUBECONFIG=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/legacy_kubeconfig1400807858
	* Using the docker driver based on existing profile
	* Pulling base image ...
	* Updating the running docker "stopped-upgrade-104552" container ...

-- /stdout --
** stderr ** 
	* 
	X Failed to enable container runtime: enable docker.: sudo systemctl start docker: exit status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xe" for details.
	
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

** /stderr **
version_upgrade_test.go:196: legacy v1.9.0 start failed: exit status 70
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (45.12s)
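All three attempts above die on the same "Job for docker.service failed" message. The diagnostics that message points at can be run from the host against the kic container while it is still up (container name from the log; standard systemctl/journalctl flags):

	docker exec stopped-upgrade-104552 systemctl status docker.service --no-pager
	docker exec stopped-upgrade-104552 journalctl -xeu docker.service --no-pager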

TestNetworkPlugins/group/kubenet/HairPin (61.78s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:238: (dbg) Run:  kubectl --context kubenet-104027 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E1109 10:59:30.548865   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/cilium-104028/client.crt: no such file or directory
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-104027 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.108961161s)

** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:238: (dbg) Run:  kubectl --context kubenet-104027 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-104027 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.110658912s)

** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:238: (dbg) Run:  kubectl --context kubenet-104027 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E1109 10:59:40.789182   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/cilium-104028/client.crt: no such file or directory
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-104027 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.120285185s)

** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:238: (dbg) Run:  kubectl --context kubenet-104027 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-104027 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.117258708s)

** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:238: (dbg) Run:  kubectl --context kubenet-104027 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"

=== CONT  TestNetworkPlugins/group/kubenet/HairPin
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-104027 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.109948495s)

** stderr ** 
	command terminated with exit code 1

** /stderr **
E1109 11:00:01.269287   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/cilium-104028/client.crt: no such file or directory
E1109 11:00:02.037243   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/skaffold-103914/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/kubenet/HairPin
net_test.go:238: (dbg) Run:  kubectl --context kubenet-104027 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"

=== CONT  TestNetworkPlugins/group/kubenet/HairPin
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-104027 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.12197447s)

** stderr ** 
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/kubenet/HairPin
net_test.go:238: (dbg) Run:  kubectl --context kubenet-104027 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"

=== CONT  TestNetworkPlugins/group/kubenet/HairPin
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-104027 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.114411313s)

** stderr ** 
	command terminated with exit code 1

** /stderr **

=== CONT  TestNetworkPlugins/group/kubenet/HairPin
net_test.go:238: (dbg) Run:  kubectl --context kubenet-104027 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E1109 11:00:27.993876   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/false-104027/client.crt: no such file or directory
E1109 11:00:27.999978   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/false-104027/client.crt: no such file or directory
E1109 11:00:28.010942   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/false-104027/client.crt: no such file or directory
E1109 11:00:28.031614   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/false-104027/client.crt: no such file or directory
E1109 11:00:28.072549   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/false-104027/client.crt: no such file or directory
E1109 11:00:28.153262   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/false-104027/client.crt: no such file or directory
E1109 11:00:28.313445   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/false-104027/client.crt: no such file or directory
E1109 11:00:28.633602   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/false-104027/client.crt: no such file or directory
net_test.go:238: (dbg) Non-zero exit: kubectl --context kubenet-104027 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.337648871s)

** stderr ** 
	command terminated with exit code 1

** /stderr **
net_test.go:243: failed to connect via pod host: exit status 1
--- FAIL: TestNetworkPlugins/group/kubenet/HairPin (61.78s)
E1109 11:21:45.305623   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/functional-100827/client.crt: no such file or directory
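net_test.go:243 gave up after repeated probes: the HairPin subtest checks that a pod can reach itself through its own Service (hairpin traffic), and on this kubenet cluster the connection never succeeded. A minimal sketch for replaying the probe by hand, assuming the kubenet-104027 profile and the test's netcat Deployment and Service are still running (all names are taken from the log above):

    # Replay the hairpin probe the test runs; exit status 0 means hairpin
    # traffic works, exit status 1 reproduces the failure above.
    kubectl --context kubenet-104027 exec deployment/netcat -- \
        /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
    echo "hairpin exit status: $?"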

TestStartStop/group/old-k8s-version/serial/FirstStart (249.38s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-110019 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0

=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p old-k8s-version-110019 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0: exit status 109 (4m8.85991218s)

-- stdout --
	* [old-k8s-version-110019] minikube v1.28.0 on Darwin 13.0
	  - MINIKUBE_LOCATION=15331
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15331-22028/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15331-22028/.minikube
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting control plane node old-k8s-version-110019 in cluster old-k8s-version-110019
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.16.0 on Docker 20.10.20 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
** stderr ** 
	I1109 11:00:19.789768   36410 out.go:296] Setting OutFile to fd 1 ...
	I1109 11:00:19.789945   36410 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1109 11:00:19.789950   36410 out.go:309] Setting ErrFile to fd 2...
	I1109 11:00:19.789954   36410 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1109 11:00:19.790063   36410 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15331-22028/.minikube/bin
	I1109 11:00:19.790664   36410 out.go:303] Setting JSON to false
	I1109 11:00:19.809565   36410 start.go:116] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":14394,"bootTime":1668006025,"procs":391,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.0","kernelVersion":"22.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W1109 11:00:19.809660   36410 start.go:124] gopshost.Virtualization returned error: not implemented yet
	I1109 11:00:19.832225   36410 out.go:177] * [old-k8s-version-110019] minikube v1.28.0 on Darwin 13.0
	I1109 11:00:19.853978   36410 notify.go:220] Checking for updates...
	I1109 11:00:19.875664   36410 out.go:177]   - MINIKUBE_LOCATION=15331
	I1109 11:00:19.897670   36410 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15331-22028/kubeconfig
	I1109 11:00:19.920945   36410 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1109 11:00:19.942893   36410 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 11:00:19.963975   36410 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15331-22028/.minikube
	I1109 11:00:19.986738   36410 config.go:180] Loaded profile config "kubenet-104027": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1109 11:00:19.986860   36410 driver.go:365] Setting default libvirt URI to qemu:///system
	I1109 11:00:20.050733   36410 docker.go:137] docker version: linux-20.10.20
	I1109 11:00:20.050900   36410 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 11:00:20.190738   36410 info.go:266] docker info: {ID:DDH7:AERQ:YR2O:D5QA:GJP7:5WGB:4FTA:645H:2EO7:4YBT:LF5P:55H2 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:53 SystemTime:2022-11-09 19:00:20.10356782 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231719936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.1] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1109 11:00:20.212501   36410 out.go:177] * Using the docker driver based on user configuration
	I1109 11:00:20.234207   36410 start.go:282] selected driver: docker
	I1109 11:00:20.234244   36410 start.go:808] validating driver "docker" against <nil>
	I1109 11:00:20.234332   36410 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 11:00:20.238204   36410 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 11:00:20.379819   36410 info.go:266] docker info: {ID:DDH7:AERQ:YR2O:D5QA:GJP7:5WGB:4FTA:645H:2EO7:4YBT:LF5P:55H2 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:53 SystemTime:2022-11-09 19:00:20.291850887 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231719936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.1] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1109 11:00:20.379940   36410 start_flags.go:303] no existing cluster config was found, will generate one from the flags 
	I1109 11:00:20.380100   36410 start_flags.go:901] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 11:00:20.401850   36410 out.go:177] * Using Docker Desktop driver with root privileges
	I1109 11:00:20.423502   36410 cni.go:95] Creating CNI manager for ""
	I1109 11:00:20.423554   36410 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1109 11:00:20.423573   36410 start_flags.go:317] config:
	{Name:old-k8s-version-110019 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-110019 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1109 11:00:20.445509   36410 out.go:177] * Starting control plane node old-k8s-version-110019 in cluster old-k8s-version-110019
	I1109 11:00:20.466761   36410 cache.go:120] Beginning downloading kic base image for docker with docker
	I1109 11:00:20.488285   36410 out.go:177] * Pulling base image ...
	I1109 11:00:20.530965   36410 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1109 11:00:20.530973   36410 image.go:76] Checking for gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon
	I1109 11:00:20.531056   36410 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15331-22028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I1109 11:00:20.531083   36410 cache.go:57] Caching tarball of preloaded images
	I1109 11:00:20.531873   36410 preload.go:174] Found /Users/jenkins/minikube-integration/15331-22028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1109 11:00:20.532038   36410 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I1109 11:00:20.532565   36410 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/old-k8s-version-110019/config.json ...
	I1109 11:00:20.532668   36410 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/old-k8s-version-110019/config.json: {Name:mk4d65ea59bac74e82d3c1d222a08384b16c273c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 11:00:20.588645   36410 image.go:80] Found gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon, skipping pull
	I1109 11:00:20.588667   36410 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 exists in daemon, skipping load
	I1109 11:00:20.588676   36410 cache.go:208] Successfully downloaded all kic artifacts
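	# (sketch, not test output) The cache hits above mean the v1.16.0 preload tarball
	# and the kicbase image were already on this host, so nothing had to be pulled.
	# The cached tarball can be inspected with:
	ls -lh /Users/jenkins/minikube-integration/15331-22028/.minikube/cache/preloaded-tarball/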
	I1109 11:00:20.588740   36410 start.go:364] acquiring machines lock for old-k8s-version-110019: {Name:mk76b064b5c16d3f79b919264a63d8292ad54339 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 11:00:20.588892   36410 start.go:368] acquired machines lock for "old-k8s-version-110019" in 139.567µs
	I1109 11:00:20.588929   36410 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-110019 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-110019 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1109 11:00:20.588998   36410 start.go:125] createHost starting for "" (driver="docker")
	I1109 11:00:20.632444   36410 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1109 11:00:20.632855   36410 start.go:159] libmachine.API.Create for "old-k8s-version-110019" (driver="docker")
	I1109 11:00:20.632904   36410 client.go:168] LocalClient.Create starting
	I1109 11:00:20.633080   36410 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca.pem
	I1109 11:00:20.633175   36410 main.go:134] libmachine: Decoding PEM data...
	I1109 11:00:20.633201   36410 main.go:134] libmachine: Parsing certificate...
	I1109 11:00:20.633293   36410 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/cert.pem
	I1109 11:00:20.633363   36410 main.go:134] libmachine: Decoding PEM data...
	I1109 11:00:20.633384   36410 main.go:134] libmachine: Parsing certificate...
	I1109 11:00:20.634190   36410 cli_runner.go:164] Run: docker network inspect old-k8s-version-110019 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1109 11:00:20.689214   36410 cli_runner.go:211] docker network inspect old-k8s-version-110019 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1109 11:00:20.689325   36410 network_create.go:272] running [docker network inspect old-k8s-version-110019] to gather additional debugging logs...
	I1109 11:00:20.689349   36410 cli_runner.go:164] Run: docker network inspect old-k8s-version-110019
	W1109 11:00:20.743157   36410 cli_runner.go:211] docker network inspect old-k8s-version-110019 returned with exit code 1
	I1109 11:00:20.743187   36410 network_create.go:275] error running [docker network inspect old-k8s-version-110019]: docker network inspect old-k8s-version-110019: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: old-k8s-version-110019
	I1109 11:00:20.743203   36410 network_create.go:277] output of [docker network inspect old-k8s-version-110019]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: old-k8s-version-110019
	
	** /stderr **
	I1109 11:00:20.743311   36410 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 11:00:20.798754   36410 network.go:295] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000d920e0] misses:0}
	I1109 11:00:20.798796   36410 network.go:241] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1109 11:00:20.798810   36410 network_create.go:115] attempt to create docker network old-k8s-version-110019 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1109 11:00:20.798928   36410 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-110019 old-k8s-version-110019
	W1109 11:00:20.853176   36410 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-110019 old-k8s-version-110019 returned with exit code 1
	W1109 11:00:20.853219   36410 network_create.go:107] failed to create docker network old-k8s-version-110019 192.168.49.0/24, will retry: subnet is taken
	I1109 11:00:20.853479   36410 network.go:286] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000d920e0] amended:false}} dirty:map[] misses:0}
	I1109 11:00:20.853497   36410 network.go:244] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1109 11:00:20.853705   36410 network.go:295] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000d920e0] amended:true}} dirty:map[192.168.49.0:0xc000d920e0 192.168.58.0:0xc000ac2b50] misses:0}
	I1109 11:00:20.853718   36410 network.go:241] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1109 11:00:20.853726   36410 network_create.go:115] attempt to create docker network old-k8s-version-110019 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1109 11:00:20.853810   36410 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-110019 old-k8s-version-110019
	W1109 11:00:20.907590   36410 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-110019 old-k8s-version-110019 returned with exit code 1
	W1109 11:00:20.907629   36410 network_create.go:107] failed to create docker network old-k8s-version-110019 192.168.58.0/24, will retry: subnet is taken
	I1109 11:00:20.907877   36410 network.go:286] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000d920e0] amended:true}} dirty:map[192.168.49.0:0xc000d920e0 192.168.58.0:0xc000ac2b50] misses:1}
	I1109 11:00:20.907895   36410 network.go:244] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1109 11:00:20.908095   36410 network.go:295] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc000d920e0] amended:true}} dirty:map[192.168.49.0:0xc000d920e0 192.168.58.0:0xc000ac2b50 192.168.67.0:0xc000d92118] misses:1}
	I1109 11:00:20.908106   36410 network.go:241] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1109 11:00:20.908112   36410 network_create.go:115] attempt to create docker network old-k8s-version-110019 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1109 11:00:20.908203   36410 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-110019 old-k8s-version-110019
	I1109 11:00:20.995096   36410 network_create.go:99] docker network old-k8s-version-110019 192.168.67.0/24 created
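	# (sketch, not test output) The two "subnet is taken" retries above are minikube
	# stepping through candidate /24s (192.168.49.0 -> 192.168.58.0 -> 192.168.67.0)
	# until `docker network create` succeeds. Against the same Docker daemon, the
	# subnets already held by existing networks can be listed with:
	docker network ls -q | xargs docker network inspect --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}}{{end}}'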
	I1109 11:00:20.995142   36410 kic.go:106] calculated static IP "192.168.67.2" for the "old-k8s-version-110019" container
	I1109 11:00:20.995277   36410 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1109 11:00:21.051732   36410 cli_runner.go:164] Run: docker volume create old-k8s-version-110019 --label name.minikube.sigs.k8s.io=old-k8s-version-110019 --label created_by.minikube.sigs.k8s.io=true
	I1109 11:00:21.107380   36410 oci.go:103] Successfully created a docker volume old-k8s-version-110019
	I1109 11:00:21.107510   36410 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-110019-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-110019 --entrypoint /usr/bin/test -v old-k8s-version-110019:/var gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -d /var/lib
	I1109 11:00:21.543603   36410 oci.go:107] Successfully prepared a docker volume old-k8s-version-110019
	I1109 11:00:21.543642   36410 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1109 11:00:21.543657   36410 kic.go:179] Starting extracting preloaded images to volume ...
	I1109 11:00:21.543791   36410 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15331-22028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-110019:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -I lz4 -xf /preloaded.tar -C /extractDir
	I1109 11:00:25.561690   36410 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15331-22028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-110019:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -I lz4 -xf /preloaded.tar -C /extractDir: (4.017888244s)
	I1109 11:00:25.561709   36410 kic.go:188] duration metric: took 4.018088 seconds to extract preloaded images to volume
	I1109 11:00:25.561834   36410 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1109 11:00:25.702022   36410 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-110019 --name old-k8s-version-110019 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-110019 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-110019 --network old-k8s-version-110019 --ip 192.168.67.2 --volume old-k8s-version-110019:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8443 --publish=8443 --publish=22 --publish=2376 --publish=5000 --publish=32443 gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456
	I1109 11:00:26.053921   36410 cli_runner.go:164] Run: docker container inspect old-k8s-version-110019 --format={{.State.Running}}
	I1109 11:00:26.112654   36410 cli_runner.go:164] Run: docker container inspect old-k8s-version-110019 --format={{.State.Status}}
	I1109 11:00:26.173742   36410 cli_runner.go:164] Run: docker exec old-k8s-version-110019 stat /var/lib/dpkg/alternatives/iptables
	I1109 11:00:26.286100   36410 oci.go:144] the created container "old-k8s-version-110019" has a running status.
	I1109 11:00:26.286594   36410 kic.go:210] Creating ssh key for kic: /Users/jenkins/minikube-integration/15331-22028/.minikube/machines/old-k8s-version-110019/id_rsa...
	I1109 11:00:26.380051   36410 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/15331-22028/.minikube/machines/old-k8s-version-110019/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1109 11:00:26.485054   36410 cli_runner.go:164] Run: docker container inspect old-k8s-version-110019 --format={{.State.Status}}
	I1109 11:00:26.543883   36410 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1109 11:00:26.543901   36410 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-110019 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1109 11:00:26.661696   36410 cli_runner.go:164] Run: docker container inspect old-k8s-version-110019 --format={{.State.Status}}
	I1109 11:00:26.719009   36410 machine.go:88] provisioning docker machine ...
	I1109 11:00:26.719047   36410 ubuntu.go:169] provisioning hostname "old-k8s-version-110019"
	I1109 11:00:26.719164   36410 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-110019
	I1109 11:00:26.775869   36410 main.go:134] libmachine: Using SSH client type: native
	I1109 11:00:26.776058   36410 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6d00] 0x13e9e80 <nil>  [] 0s} 127.0.0.1 64998 <nil> <nil>}
	I1109 11:00:26.776074   36410 main.go:134] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-110019 && echo "old-k8s-version-110019" | sudo tee /etc/hostname
	I1109 11:00:26.900105   36410 main.go:134] libmachine: SSH cmd err, output: <nil>: old-k8s-version-110019
	
	I1109 11:00:26.900233   36410 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-110019
	I1109 11:00:26.957668   36410 main.go:134] libmachine: Using SSH client type: native
	I1109 11:00:26.957843   36410 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6d00] 0x13e9e80 <nil>  [] 0s} 127.0.0.1 64998 <nil> <nil>}
	I1109 11:00:26.957858   36410 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-110019' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-110019/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-110019' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 11:00:27.073472   36410 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I1109 11:00:27.073492   36410 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15331-22028/.minikube CaCertPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15331-22028/.minikube}
	I1109 11:00:27.073512   36410 ubuntu.go:177] setting up certificates
	I1109 11:00:27.073519   36410 provision.go:83] configureAuth start
	I1109 11:00:27.073616   36410 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-110019
	I1109 11:00:27.130664   36410 provision.go:138] copyHostCerts
	I1109 11:00:27.130758   36410 exec_runner.go:144] found /Users/jenkins/minikube-integration/15331-22028/.minikube/cert.pem, removing ...
	I1109 11:00:27.130766   36410 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15331-22028/.minikube/cert.pem
	I1109 11:00:27.130891   36410 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15331-22028/.minikube/cert.pem (1123 bytes)
	I1109 11:00:27.131126   36410 exec_runner.go:144] found /Users/jenkins/minikube-integration/15331-22028/.minikube/key.pem, removing ...
	I1109 11:00:27.131133   36410 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15331-22028/.minikube/key.pem
	I1109 11:00:27.131201   36410 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15331-22028/.minikube/key.pem (1675 bytes)
	I1109 11:00:27.131358   36410 exec_runner.go:144] found /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.pem, removing ...
	I1109 11:00:27.131366   36410 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.pem
	I1109 11:00:27.131436   36410 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.pem (1082 bytes)
	I1109 11:00:27.131567   36410 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-110019 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-110019]
	I1109 11:00:27.246800   36410 provision.go:172] copyRemoteCerts
	I1109 11:00:27.246878   36410 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 11:00:27.246940   36410 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-110019
	I1109 11:00:27.303298   36410 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64998 SSHKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/old-k8s-version-110019/id_rsa Username:docker}
	I1109 11:00:27.389428   36410 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1109 11:00:27.407161   36410 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1109 11:00:27.424121   36410 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1109 11:00:27.441244   36410 provision.go:86] duration metric: configureAuth took 367.714258ms
	I1109 11:00:27.441256   36410 ubuntu.go:193] setting minikube options for container-runtime
	I1109 11:00:27.441412   36410 config.go:180] Loaded profile config "old-k8s-version-110019": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I1109 11:00:27.441489   36410 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-110019
	I1109 11:00:27.498636   36410 main.go:134] libmachine: Using SSH client type: native
	I1109 11:00:27.498809   36410 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6d00] 0x13e9e80 <nil>  [] 0s} 127.0.0.1 64998 <nil> <nil>}
	I1109 11:00:27.498825   36410 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1109 11:00:27.616474   36410 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1109 11:00:27.616487   36410 ubuntu.go:71] root file system type: overlay
	I1109 11:00:27.616610   36410 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1109 11:00:27.616704   36410 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-110019
	I1109 11:00:27.674021   36410 main.go:134] libmachine: Using SSH client type: native
	I1109 11:00:27.674181   36410 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6d00] 0x13e9e80 <nil>  [] 0s} 127.0.0.1 64998 <nil> <nil>}
	I1109 11:00:27.674230   36410 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1109 11:00:27.801503   36410 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1109 11:00:27.801626   36410 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-110019
	I1109 11:00:27.859782   36410 main.go:134] libmachine: Using SSH client type: native
	I1109 11:00:27.859948   36410 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6d00] 0x13e9e80 <nil>  [] 0s} 127.0.0.1 64998 <nil> <nil>}
	I1109 11:00:27.859962   36410 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1109 11:00:28.461618   36410 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-10-18 18:18:12.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-11-09 19:00:27.808058757 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
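	# (sketch, not test output) The diff above shows minikube swapping in its own
	# docker.service unit: the inherited ExecStart= is cleared and redefined so the
	# daemon listens on tcp://0.0.0.0:2376 with TLS and treats the 10.96.0.0/12
	# service CIDR as an insecure registry. Assuming the profile from this log, the
	# active unit on the node could be checked with:
	minikube -p old-k8s-version-110019 ssh "sudo systemctl cat docker.service"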
	I1109 11:00:28.461638   36410 machine.go:91] provisioned docker machine in 1.742628395s
	I1109 11:00:28.461645   36410 client.go:171] LocalClient.Create took 7.828806908s
	I1109 11:00:28.461663   36410 start.go:167] duration metric: libmachine.API.Create for "old-k8s-version-110019" took 7.828886706s
	I1109 11:00:28.461673   36410 start.go:300] post-start starting for "old-k8s-version-110019" (driver="docker")
	I1109 11:00:28.461677   36410 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 11:00:28.461754   36410 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 11:00:28.461824   36410 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-110019
	I1109 11:00:28.522074   36410 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64998 SSHKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/old-k8s-version-110019/id_rsa Username:docker}
	I1109 11:00:28.610827   36410 ssh_runner.go:195] Run: cat /etc/os-release
	I1109 11:00:28.614416   36410 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1109 11:00:28.614433   36410 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1109 11:00:28.614440   36410 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1109 11:00:28.614445   36410 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I1109 11:00:28.614455   36410 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15331-22028/.minikube/addons for local assets ...
	I1109 11:00:28.614551   36410 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15331-22028/.minikube/files for local assets ...
	I1109 11:00:28.614733   36410 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15331-22028/.minikube/files/etc/ssl/certs/228682.pem -> 228682.pem in /etc/ssl/certs
	I1109 11:00:28.614944   36410 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1109 11:00:28.622174   36410 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/files/etc/ssl/certs/228682.pem --> /etc/ssl/certs/228682.pem (1708 bytes)
	I1109 11:00:28.641958   36410 start.go:303] post-start completed in 180.276841ms
	I1109 11:00:28.642520   36410 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-110019
	I1109 11:00:28.701871   36410 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/old-k8s-version-110019/config.json ...
	I1109 11:00:28.702338   36410 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 11:00:28.702403   36410 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-110019
	I1109 11:00:28.759551   36410 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64998 SSHKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/old-k8s-version-110019/id_rsa Username:docker}
	I1109 11:00:28.844772   36410 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1109 11:00:28.849005   36410 start.go:128] duration metric: createHost completed in 8.260066193s
	I1109 11:00:28.849024   36410 start.go:83] releasing machines lock for "old-k8s-version-110019", held for 8.260198544s
	I1109 11:00:28.849135   36410 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-110019
	I1109 11:00:28.905951   36410 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I1109 11:00:28.905986   36410 ssh_runner.go:195] Run: systemctl --version
	I1109 11:00:28.906046   36410 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-110019
	I1109 11:00:28.906054   36410 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-110019
	I1109 11:00:28.970699   36410 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64998 SSHKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/old-k8s-version-110019/id_rsa Username:docker}
	I1109 11:00:28.970806   36410 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:64998 SSHKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/old-k8s-version-110019/id_rsa Username:docker}
	I1109 11:00:29.054576   36410 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1109 11:00:29.302725   36410 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I1109 11:00:29.302793   36410 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1109 11:00:29.313417   36410 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 11:00:29.327787   36410 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1109 11:00:29.398158   36410 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1109 11:00:29.475103   36410 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 11:00:29.552580   36410 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1109 11:00:29.781601   36410 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1109 11:00:29.817569   36410 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1109 11:00:29.900063   36410 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 20.10.20 ...
	I1109 11:00:29.900279   36410 cli_runner.go:164] Run: docker exec -t old-k8s-version-110019 dig +short host.docker.internal
	I1109 11:00:30.167736   36410 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I1109 11:00:30.167846   36410 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I1109 11:00:30.172184   36410 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
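The one-liner above is minikube's idempotent /etc/hosts update: strip any stale line for the name, append the fresh mapping into a temp file, then copy it back with sudo (a plain redirect into /etc/hosts would not run as root). The same pattern as a generic sketch, with NAME and IP as placeholders:

    NAME=host.minikube.internal
    IP=192.168.65.2
    # Drop any existing "<tab>NAME" line, append the new one, install with sudo.
    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > "/tmp/hosts.$$"
    sudo cp "/tmp/hosts.$$" /etc/hosts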
	I1109 11:00:30.182167   36410 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-110019
	I1109 11:00:30.240750   36410 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1109 11:00:30.240856   36410 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1109 11:00:30.266504   36410 docker.go:613] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I1109 11:00:30.266525   36410 docker.go:543] Images already preloaded, skipping extraction
	I1109 11:00:30.266612   36410 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1109 11:00:30.291041   36410 docker.go:613] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I1109 11:00:30.291059   36410 cache_images.go:84] Images are preloaded, skipping loading
	I1109 11:00:30.291156   36410 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
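The docker info probe above is how minikube detects the cgroup driver it then bakes into the kubelet configuration (cgroupDriver: systemd in the kubeadm config below). A driver mismatch between Docker and the kubelet is a classic reason for the kubelet refusing to start, so a hedged manual cross-check looks like:

    # What the Docker daemon reports:
    docker info --format '{{.CgroupDriver}}'
    # What the kubelet was told (file written during kubeadm init, path per the log):
    sudo grep cgroupDriver /var/lib/kubelet/config.yaml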
	I1109 11:00:30.362433   36410 cni.go:95] Creating CNI manager for ""
	I1109 11:00:30.362448   36410 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1109 11:00:30.362463   36410 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1109 11:00:30.362510   36410 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-110019 NodeName:old-k8s-version-110019 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
	I1109 11:00:30.362609   36410 kubeadm.go:161] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-110019"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-110019
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.67.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
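Before a config like the one above is fed to kubeadm init, it can be sanity-checked in isolation. A hedged sketch (subcommand availability varies by kubeadm version; the yaml path is where minikube stages the file, per the scp line below):

    # Render kubeadm's own defaults for comparison with the generated config:
    kubeadm config print init-defaults
    # Dry-run only the preflight phase against the staged file:
    sudo kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml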
	
	I1109 11:00:30.362684   36410 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-110019 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-110019 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1109 11:00:30.362760   36410 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I1109 11:00:30.371813   36410 binaries.go:44] Found k8s binaries, skipping transfer
	I1109 11:00:30.371889   36410 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1109 11:00:30.379595   36410 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (348 bytes)
	I1109 11:00:30.395650   36410 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1109 11:00:30.410734   36410 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
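The three scp memory writes above land the kubelet drop-in, the unit file, and the kubeadm config on the node. systemd only picks up the new ExecStart after re-reading its unit files; minikube leaves the actual start to kubeadm init below, but a manual verification sketch would be:

    sudo systemctl daemon-reload
    # Confirm the 10-kubeadm.conf drop-in is layered onto the kubelet unit:
    systemctl cat kubelet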
	I1109 11:00:30.424624   36410 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I1109 11:00:30.428756   36410 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 11:00:30.440197   36410 certs.go:54] Setting up /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/old-k8s-version-110019 for IP: 192.168.67.2
	I1109 11:00:30.440339   36410 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.key
	I1109 11:00:30.440413   36410 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15331-22028/.minikube/proxy-client-ca.key
	I1109 11:00:30.440472   36410 certs.go:302] generating minikube-user signed cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/old-k8s-version-110019/client.key
	I1109 11:00:30.440498   36410 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/old-k8s-version-110019/client.crt with IP's: []
	I1109 11:00:30.705147   36410 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/old-k8s-version-110019/client.crt ...
	I1109 11:00:30.705167   36410 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/old-k8s-version-110019/client.crt: {Name:mk6371bb2e27e57c32f2d270508013ce6c6ee368 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 11:00:30.719919   36410 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/old-k8s-version-110019/client.key ...
	I1109 11:00:30.719969   36410 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/old-k8s-version-110019/client.key: {Name:mk4e7947bf778084497182a882a303b2d23178f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 11:00:30.758037   36410 certs.go:302] generating minikube signed cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/old-k8s-version-110019/apiserver.key.c7fa3a9e
	I1109 11:00:30.758104   36410 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/old-k8s-version-110019/apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1109 11:00:30.906458   36410 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/old-k8s-version-110019/apiserver.crt.c7fa3a9e ...
	I1109 11:00:30.906470   36410 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/old-k8s-version-110019/apiserver.crt.c7fa3a9e: {Name:mk9a32a67871f8ad1974a164b2423c6aeec75d30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 11:00:30.923569   36410 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/old-k8s-version-110019/apiserver.key.c7fa3a9e ...
	I1109 11:00:30.923598   36410 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/old-k8s-version-110019/apiserver.key.c7fa3a9e: {Name:mk75032cfe1d394653b8cee626c1f97c42cb21bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 11:00:30.924153   36410 certs.go:320] copying /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/old-k8s-version-110019/apiserver.crt.c7fa3a9e -> /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/old-k8s-version-110019/apiserver.crt
	I1109 11:00:30.924467   36410 certs.go:324] copying /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/old-k8s-version-110019/apiserver.key.c7fa3a9e -> /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/old-k8s-version-110019/apiserver.key
	I1109 11:00:30.924794   36410 certs.go:302] generating aggregator signed cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/old-k8s-version-110019/proxy-client.key
	I1109 11:00:30.924857   36410 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/old-k8s-version-110019/proxy-client.crt with IP's: []
	I1109 11:00:31.107424   36410 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/old-k8s-version-110019/proxy-client.crt ...
	I1109 11:00:31.107442   36410 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/old-k8s-version-110019/proxy-client.crt: {Name:mk16590090115f79bc6becbf30a8a45dd1ba3411 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 11:00:31.107821   36410 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/old-k8s-version-110019/proxy-client.key ...
	I1109 11:00:31.107831   36410 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/old-k8s-version-110019/proxy-client.key: {Name:mk4f0c1f0c08231469bb59db7735f78a0f361ba4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 11:00:31.108321   36410 certs.go:388] found cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/22868.pem (1338 bytes)
	W1109 11:00:31.108383   36410 certs.go:384] ignoring /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/22868_empty.pem, impossibly tiny 0 bytes
	I1109 11:00:31.108400   36410 certs.go:388] found cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca-key.pem (1675 bytes)
	I1109 11:00:31.108447   36410 certs.go:388] found cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca.pem (1082 bytes)
	I1109 11:00:31.108491   36410 certs.go:388] found cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/cert.pem (1123 bytes)
	I1109 11:00:31.108530   36410 certs.go:388] found cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/key.pem (1675 bytes)
	I1109 11:00:31.108629   36410 certs.go:388] found cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15331-22028/.minikube/files/etc/ssl/certs/228682.pem (1708 bytes)
	I1109 11:00:31.109780   36410 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/old-k8s-version-110019/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1109 11:00:31.131270   36410 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/old-k8s-version-110019/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1109 11:00:31.148325   36410 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/old-k8s-version-110019/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1109 11:00:31.165325   36410 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/old-k8s-version-110019/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1109 11:00:31.182645   36410 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 11:00:31.202980   36410 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1109 11:00:31.220294   36410 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 11:00:31.238664   36410 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1109 11:00:31.257106   36410 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/22868.pem --> /usr/share/ca-certificates/22868.pem (1338 bytes)
	I1109 11:00:31.276195   36410 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/files/etc/ssl/certs/228682.pem --> /usr/share/ca-certificates/228682.pem (1708 bytes)
	I1109 11:00:31.293542   36410 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 11:00:31.310426   36410 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1109 11:00:31.323354   36410 ssh_runner.go:195] Run: openssl version
	I1109 11:00:31.340348   36410 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22868.pem && ln -fs /usr/share/ca-certificates/22868.pem /etc/ssl/certs/22868.pem"
	I1109 11:00:31.348559   36410 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22868.pem
	I1109 11:00:31.352419   36410 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Nov  9 18:08 /usr/share/ca-certificates/22868.pem
	I1109 11:00:31.352468   36410 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22868.pem
	I1109 11:00:31.357865   36410 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22868.pem /etc/ssl/certs/51391683.0"
	I1109 11:00:31.365728   36410 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/228682.pem && ln -fs /usr/share/ca-certificates/228682.pem /etc/ssl/certs/228682.pem"
	I1109 11:00:31.373475   36410 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/228682.pem
	I1109 11:00:31.378601   36410 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Nov  9 18:08 /usr/share/ca-certificates/228682.pem
	I1109 11:00:31.378645   36410 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/228682.pem
	I1109 11:00:31.383892   36410 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/228682.pem /etc/ssl/certs/3ec20f2e.0"
	I1109 11:00:31.392073   36410 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 11:00:31.400096   36410 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 11:00:31.405343   36410 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Nov  9 18:04 /usr/share/ca-certificates/minikubeCA.pem
	I1109 11:00:31.405410   36410 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 11:00:31.411950   36410 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
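The test/ln/openssl sequences above implement OpenSSL's hashed-symlink lookup: a CA is only found at verification time if /etc/ssl/certs/<subject-hash>.0 points at its PEM. A hedged sketch of one round of that dance (b5213941 in the log is exactly this hash for the minikube CA):

    PEM=/etc/ssl/certs/minikubeCA.pem
    H=$(openssl x509 -hash -noout -in "$PEM")
    # Create the hash symlink OpenSSL expects, e.g. /etc/ssl/certs/b5213941.0:
    sudo ln -fs "$PEM" "/etc/ssl/certs/${H}.0"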
	I1109 11:00:31.419450   36410 kubeadm.go:396] StartCluster: {Name:old-k8s-version-110019 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-110019 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1109 11:00:31.419575   36410 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1109 11:00:31.441674   36410 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1109 11:00:31.449556   36410 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1109 11:00:31.456748   36410 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I1109 11:00:31.456811   36410 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1109 11:00:31.463874   36410 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1109 11:00:31.463904   36410 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1109 11:00:31.511245   36410 kubeadm.go:317] [init] Using Kubernetes version: v1.16.0
	I1109 11:00:31.511402   36410 kubeadm.go:317] [preflight] Running pre-flight checks
	I1109 11:00:31.820041   36410 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1109 11:00:31.820152   36410 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1109 11:00:31.820266   36410 kubeadm.go:317] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1109 11:00:32.046395   36410 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1109 11:00:32.048484   36410 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1109 11:00:32.055786   36410 kubeadm.go:317] [kubelet-start] Activating the kubelet service
	I1109 11:00:32.128817   36410 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1109 11:00:32.170550   36410 out.go:204]   - Generating certificates and keys ...
	I1109 11:00:32.170643   36410 kubeadm.go:317] [certs] Using existing ca certificate authority
	I1109 11:00:32.170704   36410 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I1109 11:00:32.222440   36410 kubeadm.go:317] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1109 11:00:32.284862   36410 kubeadm.go:317] [certs] Generating "front-proxy-ca" certificate and key
	I1109 11:00:32.444382   36410 kubeadm.go:317] [certs] Generating "front-proxy-client" certificate and key
	I1109 11:00:32.606261   36410 kubeadm.go:317] [certs] Generating "etcd/ca" certificate and key
	I1109 11:00:32.914389   36410 kubeadm.go:317] [certs] Generating "etcd/server" certificate and key
	I1109 11:00:32.914511   36410 kubeadm.go:317] [certs] etcd/server serving cert is signed for DNS names [old-k8s-version-110019 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I1109 11:00:33.122458   36410 kubeadm.go:317] [certs] Generating "etcd/peer" certificate and key
	I1109 11:00:33.122623   36410 kubeadm.go:317] [certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-110019 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	I1109 11:00:33.218723   36410 kubeadm.go:317] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1109 11:00:33.384220   36410 kubeadm.go:317] [certs] Generating "apiserver-etcd-client" certificate and key
	I1109 11:00:33.552094   36410 kubeadm.go:317] [certs] Generating "sa" key and public key
	I1109 11:00:33.552152   36410 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1109 11:00:33.752238   36410 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1109 11:00:33.992471   36410 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1109 11:00:34.120109   36410 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1109 11:00:34.341620   36410 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1109 11:00:34.342220   36410 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1109 11:00:34.365694   36410 out.go:204]   - Booting up control plane ...
	I1109 11:00:34.365829   36410 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1109 11:00:34.365947   36410 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1109 11:00:34.366074   36410 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1109 11:00:34.366141   36410 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1109 11:00:34.366282   36410 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1109 11:01:14.323706   36410 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
	I1109 11:01:14.324585   36410 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1109 11:01:14.324811   36410 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1109 11:01:19.322901   36410 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1109 11:01:19.323153   36410 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1109 11:01:29.316612   36410 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1109 11:01:29.316897   36410 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1109 11:01:49.303835   36410 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1109 11:01:49.304100   36410 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1109 11:02:29.276086   36410 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1109 11:02:29.276284   36410 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1109 11:02:29.276299   36410 kubeadm.go:317] 
	I1109 11:02:29.276336   36410 kubeadm.go:317] Unfortunately, an error has occurred:
	I1109 11:02:29.276379   36410 kubeadm.go:317] 	timed out waiting for the condition
	I1109 11:02:29.276385   36410 kubeadm.go:317] 
	I1109 11:02:29.276419   36410 kubeadm.go:317] This error is likely caused by:
	I1109 11:02:29.276448   36410 kubeadm.go:317] 	- The kubelet is not running
	I1109 11:02:29.276574   36410 kubeadm.go:317] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1109 11:02:29.276591   36410 kubeadm.go:317] 
	I1109 11:02:29.276685   36410 kubeadm.go:317] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1109 11:02:29.276719   36410 kubeadm.go:317] 	- 'systemctl status kubelet'
	I1109 11:02:29.276748   36410 kubeadm.go:317] 	- 'journalctl -xeu kubelet'
	I1109 11:02:29.276753   36410 kubeadm.go:317] 
	I1109 11:02:29.276867   36410 kubeadm.go:317] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1109 11:02:29.276996   36410 kubeadm.go:317] To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	I1109 11:02:29.277089   36410 kubeadm.go:317] Here is one example of how you may list all Kubernetes containers running in docker:
	I1109 11:02:29.277138   36410 kubeadm.go:317] 	- 'docker ps -a | grep kube | grep -v pause'
	I1109 11:02:29.277229   36410 kubeadm.go:317] 	Once you have found the failing container, you can inspect its logs with:
	I1109 11:02:29.277279   36410 kubeadm.go:317] 	- 'docker logs CONTAINERID'
	I1109 11:02:29.280163   36410 kubeadm.go:317] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I1109 11:02:29.280280   36410 kubeadm.go:317] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 18.09
	I1109 11:02:29.280360   36410 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1109 11:02:29.280419   36410 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1109 11:02:29.280470   36410 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
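The advice kubeadm prints above condenses into a short triage loop on the node; gathered here as one hedged sketch, using exactly the commands the log suggests:

    systemctl status kubelet --no-pager
    sudo journalctl -xeu kubelet --no-pager | tail -n 100
    docker ps -a | grep kube | grep -v pause
    # then, for a failing container ID taken from the list above:
    # docker logs CONTAINERID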
	W1109 11:02:29.280627   36410 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [old-k8s-version-110019 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [old-k8s-version-110019 localhost] and IPs [192.168.67.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	Here is one example of how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1109 11:02:29.280655   36410 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I1109 11:02:29.695646   36410 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
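Between the two attempts, minikube wipes the half-initialized node state and confirms the kubelet is no longer active before re-running init. A hedged equivalent by hand, with the flags exactly as logged:

    sudo kubeadm reset --cri-socket /var/run/dockershim.sock --force
    sudo systemctl is-active --quiet kubelet || echo "kubelet stopped"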
	I1109 11:02:29.705475   36410 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I1109 11:02:29.705545   36410 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1109 11:02:29.712921   36410 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1109 11:02:29.712939   36410 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1109 11:02:29.759738   36410 kubeadm.go:317] [init] Using Kubernetes version: v1.16.0
	I1109 11:02:29.759795   36410 kubeadm.go:317] [preflight] Running pre-flight checks
	I1109 11:02:30.048319   36410 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1109 11:02:30.048406   36410 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1109 11:02:30.048490   36410 kubeadm.go:317] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1109 11:02:30.270662   36410 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1109 11:02:30.271559   36410 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1109 11:02:30.278002   36410 kubeadm.go:317] [kubelet-start] Activating the kubelet service
	I1109 11:02:30.338569   36410 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1109 11:02:30.360056   36410 out.go:204]   - Generating certificates and keys ...
	I1109 11:02:30.360148   36410 kubeadm.go:317] [certs] Using existing ca certificate authority
	I1109 11:02:30.360234   36410 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I1109 11:02:30.360333   36410 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1109 11:02:30.360407   36410 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I1109 11:02:30.360461   36410 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I1109 11:02:30.360507   36410 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I1109 11:02:30.360566   36410 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I1109 11:02:30.360611   36410 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I1109 11:02:30.360674   36410 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1109 11:02:30.360741   36410 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1109 11:02:30.360803   36410 kubeadm.go:317] [certs] Using the existing "sa" key
	I1109 11:02:30.360853   36410 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1109 11:02:30.542709   36410 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1109 11:02:30.650636   36410 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1109 11:02:30.750737   36410 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1109 11:02:31.080172   36410 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1109 11:02:31.080645   36410 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1109 11:02:31.102088   36410 out.go:204]   - Booting up control plane ...
	I1109 11:02:31.102274   36410 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1109 11:02:31.102420   36410 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1109 11:02:31.102562   36410 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1109 11:02:31.102748   36410 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1109 11:02:31.102976   36410 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1109 11:03:11.060440   36410 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
	I1109 11:03:11.061368   36410 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1109 11:03:11.061578   36410 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1109 11:03:16.059004   36410 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1109 11:03:16.059220   36410 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1109 11:03:26.052940   36410 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1109 11:03:26.053202   36410 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1109 11:03:46.040522   36410 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1109 11:03:46.040759   36410 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1109 11:04:26.013675   36410 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1109 11:04:26.013905   36410 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1109 11:04:26.013919   36410 kubeadm.go:317] 
	I1109 11:04:26.013966   36410 kubeadm.go:317] Unfortunately, an error has occurred:
	I1109 11:04:26.014013   36410 kubeadm.go:317] 	timed out waiting for the condition
	I1109 11:04:26.014021   36410 kubeadm.go:317] 
	I1109 11:04:26.014055   36410 kubeadm.go:317] This error is likely caused by:
	I1109 11:04:26.014087   36410 kubeadm.go:317] 	- The kubelet is not running
	I1109 11:04:26.014219   36410 kubeadm.go:317] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1109 11:04:26.014237   36410 kubeadm.go:317] 
	I1109 11:04:26.014352   36410 kubeadm.go:317] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1109 11:04:26.014388   36410 kubeadm.go:317] 	- 'systemctl status kubelet'
	I1109 11:04:26.014435   36410 kubeadm.go:317] 	- 'journalctl -xeu kubelet'
	I1109 11:04:26.014452   36410 kubeadm.go:317] 
	I1109 11:04:26.014582   36410 kubeadm.go:317] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1109 11:04:26.014696   36410 kubeadm.go:317] To troubleshoot, list all containers using your preferred container runtime's CLI, e.g. docker.
	I1109 11:04:26.014805   36410 kubeadm.go:317] Here is one example of how you may list all Kubernetes containers running in docker:
	I1109 11:04:26.014861   36410 kubeadm.go:317] 	- 'docker ps -a | grep kube | grep -v pause'
	I1109 11:04:26.014937   36410 kubeadm.go:317] 	Once you have found the failing container, you can inspect its logs with:
	I1109 11:04:26.014969   36410 kubeadm.go:317] 	- 'docker logs CONTAINERID'
	I1109 11:04:26.017489   36410 kubeadm.go:317] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I1109 11:04:26.017601   36410 kubeadm.go:317] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 18.09
	I1109 11:04:26.017687   36410 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1109 11:04:26.017753   36410 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1109 11:04:26.017819   36410 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
	I1109 11:04:26.017872   36410 kubeadm.go:398] StartCluster complete in 3m54.6005799s
	I1109 11:04:26.017971   36410 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1109 11:04:26.040099   36410 logs.go:274] 0 containers: []
	W1109 11:04:26.040111   36410 logs.go:276] No container was found matching "kube-apiserver"
	I1109 11:04:26.040196   36410 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1109 11:04:26.061764   36410 logs.go:274] 0 containers: []
	W1109 11:04:26.061775   36410 logs.go:276] No container was found matching "etcd"
	I1109 11:04:26.061856   36410 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1109 11:04:26.083359   36410 logs.go:274] 0 containers: []
	W1109 11:04:26.083371   36410 logs.go:276] No container was found matching "coredns"
	I1109 11:04:26.083456   36410 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1109 11:04:26.106807   36410 logs.go:274] 0 containers: []
	W1109 11:04:26.106818   36410 logs.go:276] No container was found matching "kube-scheduler"
	I1109 11:04:26.106904   36410 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1109 11:04:26.128896   36410 logs.go:274] 0 containers: []
	W1109 11:04:26.128908   36410 logs.go:276] No container was found matching "kube-proxy"
	I1109 11:04:26.128988   36410 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1109 11:04:26.152056   36410 logs.go:274] 0 containers: []
	W1109 11:04:26.152066   36410 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1109 11:04:26.152150   36410 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1109 11:04:26.174432   36410 logs.go:274] 0 containers: []
	W1109 11:04:26.174458   36410 logs.go:276] No container was found matching "storage-provisioner"
	I1109 11:04:26.174545   36410 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1109 11:04:26.196120   36410 logs.go:274] 0 containers: []
	W1109 11:04:26.196133   36410 logs.go:276] No container was found matching "kube-controller-manager"
	I1109 11:04:26.196139   36410 logs.go:123] Gathering logs for kubelet ...
	I1109 11:04:26.196146   36410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1109 11:04:26.235846   36410 logs.go:123] Gathering logs for dmesg ...
	I1109 11:04:26.235869   36410 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1109 11:04:26.252698   36410 logs.go:123] Gathering logs for describe nodes ...
	I1109 11:04:26.252716   36410 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1109 11:04:26.315213   36410 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1109 11:04:26.315225   36410 logs.go:123] Gathering logs for Docker ...
	I1109 11:04:26.315231   36410 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1109 11:04:26.332623   36410 logs.go:123] Gathering logs for container status ...
	I1109 11:04:26.332636   36410 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1109 11:04:28.380428   36410 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.047799415s)
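The container-status probe above leans on a small shell fallback: if which finds no crictl, the echo substitutes the bare name, that invocation fails, and execution falls through to plain docker ps -a. The same idiom spelled out:

    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a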
	W1109 11:04:28.380541   36410 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1109 11:04:28.380556   36410 out.go:239] * 
	W1109 11:04:28.380675   36410 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1109 11:04:28.380690   36410 out.go:239] * 
	W1109 11:04:28.381393   36410 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1109 11:04:28.446940   36410 out.go:177] 
	W1109 11:04:28.489061   36410 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1109 11:04:28.489151   36410 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1109 11:04:28.489204   36410 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1109 11:04:28.548932   36410 out.go:177] 

** /stderr **
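For reference, the troubleshooting advice embedded in the kubeadm output above can be replayed by hand on the node. A minimal sketch, assuming shell access to the profile container (e.g. via `minikube ssh -p old-k8s-version-110019`) and using only the probes kubeadm itself names:

	# the same health endpoint kubeadm polls during [kubelet-check]
	curl -sSL http://localhost:10248/healthz
	# kubelet service state and recent log, per the systemd suggestions in the output
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet | tail -n 50
	# any control-plane containers that started and crashed, per the docker hint
	docker ps -a | grep kube | grep -v pause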
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-amd64 start -p old-k8s-version-110019 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0": exit status 109
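The suggestion near the end of the log (`--extra-config=kubelet.cgroup-driver=systemd`) amounts to re-running the same start command with one extra flag. A sketch assembled from the failing invocation, trimmed to the flags relevant to the docker driver, with the flag value taken verbatim from the log's suggestion (not a verified fix):

	out/minikube-darwin-amd64 start -p old-k8s-version-110019 --memory=2200 \
	  --alsologtostderr --wait=true --driver=docker --kubernetes-version=v1.16.0 \
	  --extra-config=kubelet.cgroup-driver=systemd  # per the Suggestion line above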
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/FirstStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-110019
helpers_test.go:235: (dbg) docker inspect old-k8s-version-110019:

-- stdout --
	[
	    {
	        "Id": "179e62f50506851342bc4b05c87d0504aae01b76a29a4d4cfea86e9a42803961",
	        "Created": "2022-11-09T19:00:25.764137036Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 261436,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-11-09T19:00:26.053200554Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:866c1fe4e3f2d2bfd7e546c12f77c7ef1d94d65a891923ff6772712a9f20df40",
	        "ResolvConfPath": "/var/lib/docker/containers/179e62f50506851342bc4b05c87d0504aae01b76a29a4d4cfea86e9a42803961/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/179e62f50506851342bc4b05c87d0504aae01b76a29a4d4cfea86e9a42803961/hostname",
	        "HostsPath": "/var/lib/docker/containers/179e62f50506851342bc4b05c87d0504aae01b76a29a4d4cfea86e9a42803961/hosts",
	        "LogPath": "/var/lib/docker/containers/179e62f50506851342bc4b05c87d0504aae01b76a29a4d4cfea86e9a42803961/179e62f50506851342bc4b05c87d0504aae01b76a29a4d4cfea86e9a42803961-json.log",
	        "Name": "/old-k8s-version-110019",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-110019:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-110019",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/6175df1d28c3e25a0e5c63a5d7570f40eb9f7f886fa3164897a59ab06ccc7cd7-init/diff:/var/lib/docker/overlay2/8c1487330bae95024fb04d0a8169f7cc81fd1ba3c27821870f7ac7c3f14eba21/diff:/var/lib/docker/overlay2/bcaf2c5b25be7a7acfb5b663242cc7456d579ea111b07e556bc197c7bfe8eceb/diff:/var/lib/docker/overlay2/0638d8210ce7d8ac0e4379a16e33ec4ba3dad0040bc7b1e6eee9a3ce3b1bab29/diff:/var/lib/docker/overlay2/82d04ede67e6bea7f3cfd2fd8cdf0af23333441d1a311f6c55109e45255a64ad/diff:/var/lib/docker/overlay2/00bbdacd39c41ffbc754eaba2d71640e0fb4097eb9097b8c2a5999bb5a8d4954/diff:/var/lib/docker/overlay2/dcea734b558e644021b8ede0f23c4e46a58e4c344becb334c465fd62b5d48e24/diff:/var/lib/docker/overlay2/ac3602d3dd4e947c3a4676ef8c632089eb73ee68aba964a7d95271ee18eb97f2/diff:/var/lib/docker/overlay2/ac2acc0194de08599857f1b8448ae7b4683ed77f947900bfd694cf26f6c54ffc/diff:/var/lib/docker/overlay2/fdbfaed38c23fa0bd5c54d311629017408fe01fee83151dd3f3d638a7617f4e4/diff:/var/lib/docker/overlay2/d025fd
583df9cfe294d4d46082700b7f5c621b93a796ba7f8f971ddaa60fd83a/diff:/var/lib/docker/overlay2/f4c2a2db4696fc9f1bd6e98e05d393517d2daaeb90f35ae457c61d742e4cc236/diff:/var/lib/docker/overlay2/5ca3c90c302636922d6701cd2547bba3ccd398ec5ade10e04dccd4fe6104a487/diff:/var/lib/docker/overlay2/a5a65589498adaf58375923e30a95f690962a85ecbf6af317b41821b327542b2/diff:/var/lib/docker/overlay2/ff71186ee131d2e64c9cb2be6b53d85bf84ea4a195c417de669d42fe5e10eecd/diff:/var/lib/docker/overlay2/493a221169b45236aaee4b88113fdb3c67c8fbb99e614b4a728d47a8448a3f3f/diff:/var/lib/docker/overlay2/4bafd70e2ae935045921b84746858ec62889e360ddf11495e2a15831b74efc0a/diff:/var/lib/docker/overlay2/90fd6faa0cf3969fb696847bf51d309918860f0cc4599a708e4932647f26c73e/diff:/var/lib/docker/overlay2/ea92881c6586b95c867a9734394d9d100f56f7cbe0812c11395e47b6035c4508/diff:/var/lib/docker/overlay2/ecab8d41ffba5fecbe6e01377fa7b74a9a81ceea0b6ce37ad2373c1bbf89f44a/diff:/var/lib/docker/overlay2/0a01bb2689fa7bca8ea3322bf7e0b9d33392f902c096d5e452da6755180c4a06/diff:/var/lib/d
ocker/overlay2/ab470b7aab8ddccf634d27d72ad09bcf355c2bd4439dcdf67f345220671e4238/diff:/var/lib/docker/overlay2/e7aae4cf5fe266e78947648cb680b6e10a1e6f6527df18d86605a770111ddaa5/diff:/var/lib/docker/overlay2/6dd4c667173ad3322ca465531a62d549cfe66fbb40165818a4e3923e37895eee/diff:/var/lib/docker/overlay2/6053a29c5dc20476b02a6b6d0dafc1d7a81702c6680392177192d709341eabd0/diff:/var/lib/docker/overlay2/80d8ec07feaf3a90ae374a6503523b083045c37de15abf3c2f12d0a21bea84c4/diff:/var/lib/docker/overlay2/55ad8679d9710c334bac8daf6e3b0f9a8fcafc62f44b8f2612bb054ff91aac64/diff:/var/lib/docker/overlay2/64743b589f654fa1e35b0e7be5ff94a3bebfa17c8f1c9811e0d42cdade3f57e7/diff:/var/lib/docker/overlay2/3722e4a69202d28b84adf462e6aa9f065e8079b1c00f372b68d56c9b2c44e658/diff:/var/lib/docker/overlay2/d1ceccb867521773a63007a600d64b8537e1cb227e2d9a6f9df5525e8315b3ef/diff:/var/lib/docker/overlay2/5de0b7762a7bcd971dba6ab8b5ec3a1163b2eb72c904b17e6b0b10dac2ed8cc6/diff:/var/lib/docker/overlay2/36f2255b89964a0e12e3175634bd5c1dfabf520e5a894e260323e26c3a3
83e8c/diff:/var/lib/docker/overlay2/58ca82e7923ce16120ce2bdcabd5d071ca9618a7139cac111d5d271fcb44d6b6/diff:/var/lib/docker/overlay2/c6b28d136c7e3834c9977a2115a7c798e71334d33a76997b156f96642e187677/diff:/var/lib/docker/overlay2/8a75a817735ea5c25b9b75502ba91bba33b5160dab28a17f2f44fa68bd8dcc3f/diff:/var/lib/docker/overlay2/4513fa1cc1e8023f3c0a924e36218c37dfe3595aec46e4d2d96d6c165774b8a3/diff:/var/lib/docker/overlay2/3d3be6ad927b487673f3c43210c9ea9a1acfa4d46cbcb724fce27baf9158b507/diff:/var/lib/docker/overlay2/b8e22ec10062469f680485d2f5f73afce0218c32b25e56188c00547a8152d0c7/diff:/var/lib/docker/overlay2/cb1cb5efbfa387d8fc791f28bdad103d39664ae58a6e372eddc5588db5779427/diff:/var/lib/docker/overlay2/c796de90ee7673fa4d316d056c320ee04f0b6ba574aaa33e4073e3a7200c11a6/diff:/var/lib/docker/overlay2/73c2de759693b5ffd934f7354e3db91ba89c6a5a9c24621fd7c27411bc335c5a/diff:/var/lib/docker/overlay2/46e9fe39b8edeecbe0b31037d24c2994ac3848fbb3af5ed3c47ca2fc1ad0d301/diff:/var/lib/docker/overlay2/febe0fa15a70685bf242a86e91427efdb9b7ec
302a48a7004f89cc569145c7a1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6175df1d28c3e25a0e5c63a5d7570f40eb9f7f886fa3164897a59ab06ccc7cd7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6175df1d28c3e25a0e5c63a5d7570f40eb9f7f886fa3164897a59ab06ccc7cd7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6175df1d28c3e25a0e5c63a5d7570f40eb9f7f886fa3164897a59ab06ccc7cd7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-110019",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-110019/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-110019",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-110019",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-110019",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "44494f97785315f2d1fdc6bb319dc28d787c45133affaa204ff4f7752507390b",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "64998"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "64999"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "65000"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "65001"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "65002"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/44494f977853",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-110019": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "179e62f50506",
	                        "old-k8s-version-110019"
	                    ],
	                    "NetworkID": "70a1b44058ab5d3fa2f8c48ca78ea76e689efbb2630885d7458319462051756b",
	                    "EndpointID": "0b18270530563710af60960f244d7aa6644373128ea0b53fb06c325870e567a9",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
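When the full `docker inspect` dump is more than needed, the same record can be queried field-by-field with docker's built-in Go-template formatter. For example, to confirm the two facts that matter for this post-mortem (the container is running and has an address on the profile network):

	# prints "running", matching State.Status in the dump above
	docker inspect -f '{{.State.Status}}' old-k8s-version-110019
	# prints 192.168.67.2, matching the NetworkSettings section above
	docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' old-k8s-version-110019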
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-110019 -n old-k8s-version-110019
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-110019 -n old-k8s-version-110019: exit status 6 (427.385524ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1109 11:04:29.103120   37051 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-110019" does not appear in /Users/jenkins/minikube-integration/15331-22028/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-110019" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
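The WARNING in the status output and the `extract IP` error in stderr point at the same root cause: the profile is missing from the kubeconfig at /Users/jenkins/minikube-integration/15331-22028/kubeconfig. A sketch of the fix the tool itself proposes:

	# rewrites the kubeconfig context for this profile, per the WARNING above
	out/minikube-darwin-amd64 update-context -p old-k8s-version-110019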
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (249.38s)

TestStartStop/group/old-k8s-version/serial/DeployApp (0.95s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-110019 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-110019 create -f testdata/busybox.yaml: exit status 1 (35.161109ms)

** stderr ** 
	error: context "old-k8s-version-110019" does not exist

** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-110019 create -f testdata/busybox.yaml failed: exit status 1
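`context "old-k8s-version-110019" does not exist` is the expected follow-on from the kubeconfig error in FirstStart: the cluster never came up, so no context was ever written for it. One way to confirm, assuming the same kubeconfig is in effect:

	# the profile name should be absent from both listings
	kubectl config get-contexts
	kubectl config view -o jsonpath='{.contexts[*].name}'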
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-110019
helpers_test.go:235: (dbg) docker inspect old-k8s-version-110019:

-- stdout --
	[
	    {
	        "Id": "179e62f50506851342bc4b05c87d0504aae01b76a29a4d4cfea86e9a42803961",
	        "Created": "2022-11-09T19:00:25.764137036Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 261436,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-11-09T19:00:26.053200554Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:866c1fe4e3f2d2bfd7e546c12f77c7ef1d94d65a891923ff6772712a9f20df40",
	        "ResolvConfPath": "/var/lib/docker/containers/179e62f50506851342bc4b05c87d0504aae01b76a29a4d4cfea86e9a42803961/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/179e62f50506851342bc4b05c87d0504aae01b76a29a4d4cfea86e9a42803961/hostname",
	        "HostsPath": "/var/lib/docker/containers/179e62f50506851342bc4b05c87d0504aae01b76a29a4d4cfea86e9a42803961/hosts",
	        "LogPath": "/var/lib/docker/containers/179e62f50506851342bc4b05c87d0504aae01b76a29a4d4cfea86e9a42803961/179e62f50506851342bc4b05c87d0504aae01b76a29a4d4cfea86e9a42803961-json.log",
	        "Name": "/old-k8s-version-110019",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-110019:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-110019",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/6175df1d28c3e25a0e5c63a5d7570f40eb9f7f886fa3164897a59ab06ccc7cd7-init/diff:/var/lib/docker/overlay2/8c1487330bae95024fb04d0a8169f7cc81fd1ba3c27821870f7ac7c3f14eba21/diff:/var/lib/docker/overlay2/bcaf2c5b25be7a7acfb5b663242cc7456d579ea111b07e556bc197c7bfe8eceb/diff:/var/lib/docker/overlay2/0638d8210ce7d8ac0e4379a16e33ec4ba3dad0040bc7b1e6eee9a3ce3b1bab29/diff:/var/lib/docker/overlay2/82d04ede67e6bea7f3cfd2fd8cdf0af23333441d1a311f6c55109e45255a64ad/diff:/var/lib/docker/overlay2/00bbdacd39c41ffbc754eaba2d71640e0fb4097eb9097b8c2a5999bb5a8d4954/diff:/var/lib/docker/overlay2/dcea734b558e644021b8ede0f23c4e46a58e4c344becb334c465fd62b5d48e24/diff:/var/lib/docker/overlay2/ac3602d3dd4e947c3a4676ef8c632089eb73ee68aba964a7d95271ee18eb97f2/diff:/var/lib/docker/overlay2/ac2acc0194de08599857f1b8448ae7b4683ed77f947900bfd694cf26f6c54ffc/diff:/var/lib/docker/overlay2/fdbfaed38c23fa0bd5c54d311629017408fe01fee83151dd3f3d638a7617f4e4/diff:/var/lib/docker/overlay2/d025fd
583df9cfe294d4d46082700b7f5c621b93a796ba7f8f971ddaa60fd83a/diff:/var/lib/docker/overlay2/f4c2a2db4696fc9f1bd6e98e05d393517d2daaeb90f35ae457c61d742e4cc236/diff:/var/lib/docker/overlay2/5ca3c90c302636922d6701cd2547bba3ccd398ec5ade10e04dccd4fe6104a487/diff:/var/lib/docker/overlay2/a5a65589498adaf58375923e30a95f690962a85ecbf6af317b41821b327542b2/diff:/var/lib/docker/overlay2/ff71186ee131d2e64c9cb2be6b53d85bf84ea4a195c417de669d42fe5e10eecd/diff:/var/lib/docker/overlay2/493a221169b45236aaee4b88113fdb3c67c8fbb99e614b4a728d47a8448a3f3f/diff:/var/lib/docker/overlay2/4bafd70e2ae935045921b84746858ec62889e360ddf11495e2a15831b74efc0a/diff:/var/lib/docker/overlay2/90fd6faa0cf3969fb696847bf51d309918860f0cc4599a708e4932647f26c73e/diff:/var/lib/docker/overlay2/ea92881c6586b95c867a9734394d9d100f56f7cbe0812c11395e47b6035c4508/diff:/var/lib/docker/overlay2/ecab8d41ffba5fecbe6e01377fa7b74a9a81ceea0b6ce37ad2373c1bbf89f44a/diff:/var/lib/docker/overlay2/0a01bb2689fa7bca8ea3322bf7e0b9d33392f902c096d5e452da6755180c4a06/diff:/var/lib/d
ocker/overlay2/ab470b7aab8ddccf634d27d72ad09bcf355c2bd4439dcdf67f345220671e4238/diff:/var/lib/docker/overlay2/e7aae4cf5fe266e78947648cb680b6e10a1e6f6527df18d86605a770111ddaa5/diff:/var/lib/docker/overlay2/6dd4c667173ad3322ca465531a62d549cfe66fbb40165818a4e3923e37895eee/diff:/var/lib/docker/overlay2/6053a29c5dc20476b02a6b6d0dafc1d7a81702c6680392177192d709341eabd0/diff:/var/lib/docker/overlay2/80d8ec07feaf3a90ae374a6503523b083045c37de15abf3c2f12d0a21bea84c4/diff:/var/lib/docker/overlay2/55ad8679d9710c334bac8daf6e3b0f9a8fcafc62f44b8f2612bb054ff91aac64/diff:/var/lib/docker/overlay2/64743b589f654fa1e35b0e7be5ff94a3bebfa17c8f1c9811e0d42cdade3f57e7/diff:/var/lib/docker/overlay2/3722e4a69202d28b84adf462e6aa9f065e8079b1c00f372b68d56c9b2c44e658/diff:/var/lib/docker/overlay2/d1ceccb867521773a63007a600d64b8537e1cb227e2d9a6f9df5525e8315b3ef/diff:/var/lib/docker/overlay2/5de0b7762a7bcd971dba6ab8b5ec3a1163b2eb72c904b17e6b0b10dac2ed8cc6/diff:/var/lib/docker/overlay2/36f2255b89964a0e12e3175634bd5c1dfabf520e5a894e260323e26c3a3
83e8c/diff:/var/lib/docker/overlay2/58ca82e7923ce16120ce2bdcabd5d071ca9618a7139cac111d5d271fcb44d6b6/diff:/var/lib/docker/overlay2/c6b28d136c7e3834c9977a2115a7c798e71334d33a76997b156f96642e187677/diff:/var/lib/docker/overlay2/8a75a817735ea5c25b9b75502ba91bba33b5160dab28a17f2f44fa68bd8dcc3f/diff:/var/lib/docker/overlay2/4513fa1cc1e8023f3c0a924e36218c37dfe3595aec46e4d2d96d6c165774b8a3/diff:/var/lib/docker/overlay2/3d3be6ad927b487673f3c43210c9ea9a1acfa4d46cbcb724fce27baf9158b507/diff:/var/lib/docker/overlay2/b8e22ec10062469f680485d2f5f73afce0218c32b25e56188c00547a8152d0c7/diff:/var/lib/docker/overlay2/cb1cb5efbfa387d8fc791f28bdad103d39664ae58a6e372eddc5588db5779427/diff:/var/lib/docker/overlay2/c796de90ee7673fa4d316d056c320ee04f0b6ba574aaa33e4073e3a7200c11a6/diff:/var/lib/docker/overlay2/73c2de759693b5ffd934f7354e3db91ba89c6a5a9c24621fd7c27411bc335c5a/diff:/var/lib/docker/overlay2/46e9fe39b8edeecbe0b31037d24c2994ac3848fbb3af5ed3c47ca2fc1ad0d301/diff:/var/lib/docker/overlay2/febe0fa15a70685bf242a86e91427efdb9b7ec
302a48a7004f89cc569145c7a1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6175df1d28c3e25a0e5c63a5d7570f40eb9f7f886fa3164897a59ab06ccc7cd7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6175df1d28c3e25a0e5c63a5d7570f40eb9f7f886fa3164897a59ab06ccc7cd7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6175df1d28c3e25a0e5c63a5d7570f40eb9f7f886fa3164897a59ab06ccc7cd7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-110019",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-110019/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-110019",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-110019",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-110019",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "44494f97785315f2d1fdc6bb319dc28d787c45133affaa204ff4f7752507390b",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "64998"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "64999"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "65000"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "65001"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "65002"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/44494f977853",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-110019": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "179e62f50506",
	                        "old-k8s-version-110019"
	                    ],
	                    "NetworkID": "70a1b44058ab5d3fa2f8c48ca78ea76e689efbb2630885d7458319462051756b",
	                    "EndpointID": "0b18270530563710af60960f244d7aa6644373128ea0b53fb06c325870e567a9",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
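The inspect dump above shows the container itself is healthy: state running, and the profile network publishes the apiserver's 8443/tcp out to host port 65002. For quicker triage than the full JSON, the same fields can be pulled with Go templates; a minimal sketch, assuming the profile name from this run:

    # container state only
    docker inspect -f '{{.State.Status}}' old-k8s-version-110019
    # the node IP the kubeconfig entry should point at
    docker inspect -f '{{(index .NetworkSettings.Networks "old-k8s-version-110019").IPAddress}}' old-k8s-version-110019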
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-110019 -n old-k8s-version-110019
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-110019 -n old-k8s-version-110019: exit status 6 (401.265659ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1109 11:04:29.598235   37064 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-110019" does not appear in /Users/jenkins/minikube-integration/15331-22028/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-110019" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-110019
helpers_test.go:235: (dbg) docker inspect old-k8s-version-110019:

-- stdout --
	[
	    {
	        "Id": "179e62f50506851342bc4b05c87d0504aae01b76a29a4d4cfea86e9a42803961",
	        "Created": "2022-11-09T19:00:25.764137036Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 261436,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-11-09T19:00:26.053200554Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:866c1fe4e3f2d2bfd7e546c12f77c7ef1d94d65a891923ff6772712a9f20df40",
	        "ResolvConfPath": "/var/lib/docker/containers/179e62f50506851342bc4b05c87d0504aae01b76a29a4d4cfea86e9a42803961/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/179e62f50506851342bc4b05c87d0504aae01b76a29a4d4cfea86e9a42803961/hostname",
	        "HostsPath": "/var/lib/docker/containers/179e62f50506851342bc4b05c87d0504aae01b76a29a4d4cfea86e9a42803961/hosts",
	        "LogPath": "/var/lib/docker/containers/179e62f50506851342bc4b05c87d0504aae01b76a29a4d4cfea86e9a42803961/179e62f50506851342bc4b05c87d0504aae01b76a29a4d4cfea86e9a42803961-json.log",
	        "Name": "/old-k8s-version-110019",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-110019:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-110019",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/6175df1d28c3e25a0e5c63a5d7570f40eb9f7f886fa3164897a59ab06ccc7cd7-init/diff:/var/lib/docker/overlay2/8c1487330bae95024fb04d0a8169f7cc81fd1ba3c27821870f7ac7c3f14eba21/diff:/var/lib/docker/overlay2/bcaf2c5b25be7a7acfb5b663242cc7456d579ea111b07e556bc197c7bfe8eceb/diff:/var/lib/docker/overlay2/0638d8210ce7d8ac0e4379a16e33ec4ba3dad0040bc7b1e6eee9a3ce3b1bab29/diff:/var/lib/docker/overlay2/82d04ede67e6bea7f3cfd2fd8cdf0af23333441d1a311f6c55109e45255a64ad/diff:/var/lib/docker/overlay2/00bbdacd39c41ffbc754eaba2d71640e0fb4097eb9097b8c2a5999bb5a8d4954/diff:/var/lib/docker/overlay2/dcea734b558e644021b8ede0f23c4e46a58e4c344becb334c465fd62b5d48e24/diff:/var/lib/docker/overlay2/ac3602d3dd4e947c3a4676ef8c632089eb73ee68aba964a7d95271ee18eb97f2/diff:/var/lib/docker/overlay2/ac2acc0194de08599857f1b8448ae7b4683ed77f947900bfd694cf26f6c54ffc/diff:/var/lib/docker/overlay2/fdbfaed38c23fa0bd5c54d311629017408fe01fee83151dd3f3d638a7617f4e4/diff:/var/lib/docker/overlay2/d025fd
583df9cfe294d4d46082700b7f5c621b93a796ba7f8f971ddaa60fd83a/diff:/var/lib/docker/overlay2/f4c2a2db4696fc9f1bd6e98e05d393517d2daaeb90f35ae457c61d742e4cc236/diff:/var/lib/docker/overlay2/5ca3c90c302636922d6701cd2547bba3ccd398ec5ade10e04dccd4fe6104a487/diff:/var/lib/docker/overlay2/a5a65589498adaf58375923e30a95f690962a85ecbf6af317b41821b327542b2/diff:/var/lib/docker/overlay2/ff71186ee131d2e64c9cb2be6b53d85bf84ea4a195c417de669d42fe5e10eecd/diff:/var/lib/docker/overlay2/493a221169b45236aaee4b88113fdb3c67c8fbb99e614b4a728d47a8448a3f3f/diff:/var/lib/docker/overlay2/4bafd70e2ae935045921b84746858ec62889e360ddf11495e2a15831b74efc0a/diff:/var/lib/docker/overlay2/90fd6faa0cf3969fb696847bf51d309918860f0cc4599a708e4932647f26c73e/diff:/var/lib/docker/overlay2/ea92881c6586b95c867a9734394d9d100f56f7cbe0812c11395e47b6035c4508/diff:/var/lib/docker/overlay2/ecab8d41ffba5fecbe6e01377fa7b74a9a81ceea0b6ce37ad2373c1bbf89f44a/diff:/var/lib/docker/overlay2/0a01bb2689fa7bca8ea3322bf7e0b9d33392f902c096d5e452da6755180c4a06/diff:/var/lib/d
ocker/overlay2/ab470b7aab8ddccf634d27d72ad09bcf355c2bd4439dcdf67f345220671e4238/diff:/var/lib/docker/overlay2/e7aae4cf5fe266e78947648cb680b6e10a1e6f6527df18d86605a770111ddaa5/diff:/var/lib/docker/overlay2/6dd4c667173ad3322ca465531a62d549cfe66fbb40165818a4e3923e37895eee/diff:/var/lib/docker/overlay2/6053a29c5dc20476b02a6b6d0dafc1d7a81702c6680392177192d709341eabd0/diff:/var/lib/docker/overlay2/80d8ec07feaf3a90ae374a6503523b083045c37de15abf3c2f12d0a21bea84c4/diff:/var/lib/docker/overlay2/55ad8679d9710c334bac8daf6e3b0f9a8fcafc62f44b8f2612bb054ff91aac64/diff:/var/lib/docker/overlay2/64743b589f654fa1e35b0e7be5ff94a3bebfa17c8f1c9811e0d42cdade3f57e7/diff:/var/lib/docker/overlay2/3722e4a69202d28b84adf462e6aa9f065e8079b1c00f372b68d56c9b2c44e658/diff:/var/lib/docker/overlay2/d1ceccb867521773a63007a600d64b8537e1cb227e2d9a6f9df5525e8315b3ef/diff:/var/lib/docker/overlay2/5de0b7762a7bcd971dba6ab8b5ec3a1163b2eb72c904b17e6b0b10dac2ed8cc6/diff:/var/lib/docker/overlay2/36f2255b89964a0e12e3175634bd5c1dfabf520e5a894e260323e26c3a3
83e8c/diff:/var/lib/docker/overlay2/58ca82e7923ce16120ce2bdcabd5d071ca9618a7139cac111d5d271fcb44d6b6/diff:/var/lib/docker/overlay2/c6b28d136c7e3834c9977a2115a7c798e71334d33a76997b156f96642e187677/diff:/var/lib/docker/overlay2/8a75a817735ea5c25b9b75502ba91bba33b5160dab28a17f2f44fa68bd8dcc3f/diff:/var/lib/docker/overlay2/4513fa1cc1e8023f3c0a924e36218c37dfe3595aec46e4d2d96d6c165774b8a3/diff:/var/lib/docker/overlay2/3d3be6ad927b487673f3c43210c9ea9a1acfa4d46cbcb724fce27baf9158b507/diff:/var/lib/docker/overlay2/b8e22ec10062469f680485d2f5f73afce0218c32b25e56188c00547a8152d0c7/diff:/var/lib/docker/overlay2/cb1cb5efbfa387d8fc791f28bdad103d39664ae58a6e372eddc5588db5779427/diff:/var/lib/docker/overlay2/c796de90ee7673fa4d316d056c320ee04f0b6ba574aaa33e4073e3a7200c11a6/diff:/var/lib/docker/overlay2/73c2de759693b5ffd934f7354e3db91ba89c6a5a9c24621fd7c27411bc335c5a/diff:/var/lib/docker/overlay2/46e9fe39b8edeecbe0b31037d24c2994ac3848fbb3af5ed3c47ca2fc1ad0d301/diff:/var/lib/docker/overlay2/febe0fa15a70685bf242a86e91427efdb9b7ec
302a48a7004f89cc569145c7a1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6175df1d28c3e25a0e5c63a5d7570f40eb9f7f886fa3164897a59ab06ccc7cd7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6175df1d28c3e25a0e5c63a5d7570f40eb9f7f886fa3164897a59ab06ccc7cd7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6175df1d28c3e25a0e5c63a5d7570f40eb9f7f886fa3164897a59ab06ccc7cd7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-110019",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-110019/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-110019",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-110019",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-110019",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "44494f97785315f2d1fdc6bb319dc28d787c45133affaa204ff4f7752507390b",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "64998"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "64999"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "65000"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "65001"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "65002"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/44494f977853",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-110019": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "179e62f50506",
	                        "old-k8s-version-110019"
	                    ],
	                    "NetworkID": "70a1b44058ab5d3fa2f8c48ca78ea76e689efbb2630885d7458319462051756b",
	                    "EndpointID": "0b18270530563710af60960f244d7aa6644373128ea0b53fb06c325870e567a9",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-110019 -n old-k8s-version-110019
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-110019 -n old-k8s-version-110019: exit status 6 (392.938789ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1109 11:04:30.051708   37078 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-110019" does not appear in /Users/jenkins/minikube-integration/15331-22028/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-110019" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.95s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (89.68s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-110019 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1109 11:04:30.674636   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/enable-default-cni-104027/client.crt: no such file or directory
E1109 11:04:34.706569   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/kubenet-104027/client.crt: no such file or directory
E1109 11:04:47.989578   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/cilium-104028/client.crt: no such file or directory
E1109 11:04:55.186634   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/kubenet-104027/client.crt: no such file or directory
E1109 11:04:57.843461   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/calico-104028/client.crt: no such file or directory
E1109 11:04:57.848848   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/calico-104028/client.crt: no such file or directory
E1109 11:04:57.861090   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/calico-104028/client.crt: no such file or directory
E1109 11:04:57.881508   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/calico-104028/client.crt: no such file or directory
E1109 11:04:57.923710   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/calico-104028/client.crt: no such file or directory
E1109 11:04:58.005878   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/calico-104028/client.crt: no such file or directory
E1109 11:04:58.166056   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/calico-104028/client.crt: no such file or directory
E1109 11:04:58.486171   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/calico-104028/client.crt: no such file or directory
E1109 11:04:59.126533   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/calico-104028/client.crt: no such file or directory
E1109 11:05:00.408838   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/calico-104028/client.crt: no such file or directory
E1109 11:05:02.036368   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/skaffold-103914/client.crt: no such file or directory
E1109 11:05:02.968985   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/calico-104028/client.crt: no such file or directory
E1109 11:05:08.091234   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/calico-104028/client.crt: no such file or directory
E1109 11:05:18.331892   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/calico-104028/client.crt: no such file or directory
E1109 11:05:27.990276   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/false-104027/client.crt: no such file or directory
E1109 11:05:36.146710   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/kubenet-104027/client.crt: no such file or directory
E1109 11:05:38.814012   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/calico-104028/client.crt: no such file or directory
E1109 11:05:52.596221   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/enable-default-cni-104027/client.crt: no such file or directory
E1109 11:05:55.758021   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/false-104027/client.crt: no such file or directory
E1109 11:05:56.519917   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/addons-100328/client.crt: no such file or directory
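The cert_rotation errors interleaved above reference client certificates for profiles (calico-104028, kubenet-104027, and others) that earlier tests already deleted; the client-go certificate watcher keeps polling the removed paths, so they are most likely noise relative to this test's failure. If stale profile state needed clearing between runs, a hypothetical cleanup step would be:

    # purge all local minikube profiles and cached state (not part of this run)
    out/minikube-darwin-amd64 delete --all --purge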
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-110019 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m29.19587483s)

-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	unable to recognize "/etc/kubernetes/addons/metrics-apiservice.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-deployment.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-rbac.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	unable to recognize "/etc/kubernetes/addons/metrics-server-service.yaml": Get https://localhost:8443/api?timeout=32s: dial tcp 127.0.0.1:8443: connect: connection refused
	]
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-110019 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-110019 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-110019 describe deploy/metrics-server -n kube-system: exit status 1 (34.707272ms)

** stderr ** 
	error: context "old-k8s-version-110019" does not exist

** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-110019 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/k8s.gcr.io/echoserver:1.4". Addon deployment info: 
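Every apply above fails with connection refused on 127.0.0.1:8443, meaning the apiserver inside the node container is not listening; that also explains the empty deployment info and the missing kubectl context. A quick host-side check, sketched under the assumption that the kicbase node runs its own inner dockerd and ships curl:

    # is a kube-apiserver container running inside the node?
    docker exec old-k8s-version-110019 docker ps --filter name=kube-apiserver
    # probe the apiserver health endpoint from inside the node
    docker exec old-k8s-version-110019 curl -sk https://127.0.0.1:8443/healthz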
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-110019
helpers_test.go:235: (dbg) docker inspect old-k8s-version-110019:

-- stdout --
	[
	    {
	        "Id": "179e62f50506851342bc4b05c87d0504aae01b76a29a4d4cfea86e9a42803961",
	        "Created": "2022-11-09T19:00:25.764137036Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 261436,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-11-09T19:00:26.053200554Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:866c1fe4e3f2d2bfd7e546c12f77c7ef1d94d65a891923ff6772712a9f20df40",
	        "ResolvConfPath": "/var/lib/docker/containers/179e62f50506851342bc4b05c87d0504aae01b76a29a4d4cfea86e9a42803961/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/179e62f50506851342bc4b05c87d0504aae01b76a29a4d4cfea86e9a42803961/hostname",
	        "HostsPath": "/var/lib/docker/containers/179e62f50506851342bc4b05c87d0504aae01b76a29a4d4cfea86e9a42803961/hosts",
	        "LogPath": "/var/lib/docker/containers/179e62f50506851342bc4b05c87d0504aae01b76a29a4d4cfea86e9a42803961/179e62f50506851342bc4b05c87d0504aae01b76a29a4d4cfea86e9a42803961-json.log",
	        "Name": "/old-k8s-version-110019",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-110019:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-110019",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/6175df1d28c3e25a0e5c63a5d7570f40eb9f7f886fa3164897a59ab06ccc7cd7-init/diff:/var/lib/docker/overlay2/8c1487330bae95024fb04d0a8169f7cc81fd1ba3c27821870f7ac7c3f14eba21/diff:/var/lib/docker/overlay2/bcaf2c5b25be7a7acfb5b663242cc7456d579ea111b07e556bc197c7bfe8eceb/diff:/var/lib/docker/overlay2/0638d8210ce7d8ac0e4379a16e33ec4ba3dad0040bc7b1e6eee9a3ce3b1bab29/diff:/var/lib/docker/overlay2/82d04ede67e6bea7f3cfd2fd8cdf0af23333441d1a311f6c55109e45255a64ad/diff:/var/lib/docker/overlay2/00bbdacd39c41ffbc754eaba2d71640e0fb4097eb9097b8c2a5999bb5a8d4954/diff:/var/lib/docker/overlay2/dcea734b558e644021b8ede0f23c4e46a58e4c344becb334c465fd62b5d48e24/diff:/var/lib/docker/overlay2/ac3602d3dd4e947c3a4676ef8c632089eb73ee68aba964a7d95271ee18eb97f2/diff:/var/lib/docker/overlay2/ac2acc0194de08599857f1b8448ae7b4683ed77f947900bfd694cf26f6c54ffc/diff:/var/lib/docker/overlay2/fdbfaed38c23fa0bd5c54d311629017408fe01fee83151dd3f3d638a7617f4e4/diff:/var/lib/docker/overlay2/d025fd
583df9cfe294d4d46082700b7f5c621b93a796ba7f8f971ddaa60fd83a/diff:/var/lib/docker/overlay2/f4c2a2db4696fc9f1bd6e98e05d393517d2daaeb90f35ae457c61d742e4cc236/diff:/var/lib/docker/overlay2/5ca3c90c302636922d6701cd2547bba3ccd398ec5ade10e04dccd4fe6104a487/diff:/var/lib/docker/overlay2/a5a65589498adaf58375923e30a95f690962a85ecbf6af317b41821b327542b2/diff:/var/lib/docker/overlay2/ff71186ee131d2e64c9cb2be6b53d85bf84ea4a195c417de669d42fe5e10eecd/diff:/var/lib/docker/overlay2/493a221169b45236aaee4b88113fdb3c67c8fbb99e614b4a728d47a8448a3f3f/diff:/var/lib/docker/overlay2/4bafd70e2ae935045921b84746858ec62889e360ddf11495e2a15831b74efc0a/diff:/var/lib/docker/overlay2/90fd6faa0cf3969fb696847bf51d309918860f0cc4599a708e4932647f26c73e/diff:/var/lib/docker/overlay2/ea92881c6586b95c867a9734394d9d100f56f7cbe0812c11395e47b6035c4508/diff:/var/lib/docker/overlay2/ecab8d41ffba5fecbe6e01377fa7b74a9a81ceea0b6ce37ad2373c1bbf89f44a/diff:/var/lib/docker/overlay2/0a01bb2689fa7bca8ea3322bf7e0b9d33392f902c096d5e452da6755180c4a06/diff:/var/lib/d
ocker/overlay2/ab470b7aab8ddccf634d27d72ad09bcf355c2bd4439dcdf67f345220671e4238/diff:/var/lib/docker/overlay2/e7aae4cf5fe266e78947648cb680b6e10a1e6f6527df18d86605a770111ddaa5/diff:/var/lib/docker/overlay2/6dd4c667173ad3322ca465531a62d549cfe66fbb40165818a4e3923e37895eee/diff:/var/lib/docker/overlay2/6053a29c5dc20476b02a6b6d0dafc1d7a81702c6680392177192d709341eabd0/diff:/var/lib/docker/overlay2/80d8ec07feaf3a90ae374a6503523b083045c37de15abf3c2f12d0a21bea84c4/diff:/var/lib/docker/overlay2/55ad8679d9710c334bac8daf6e3b0f9a8fcafc62f44b8f2612bb054ff91aac64/diff:/var/lib/docker/overlay2/64743b589f654fa1e35b0e7be5ff94a3bebfa17c8f1c9811e0d42cdade3f57e7/diff:/var/lib/docker/overlay2/3722e4a69202d28b84adf462e6aa9f065e8079b1c00f372b68d56c9b2c44e658/diff:/var/lib/docker/overlay2/d1ceccb867521773a63007a600d64b8537e1cb227e2d9a6f9df5525e8315b3ef/diff:/var/lib/docker/overlay2/5de0b7762a7bcd971dba6ab8b5ec3a1163b2eb72c904b17e6b0b10dac2ed8cc6/diff:/var/lib/docker/overlay2/36f2255b89964a0e12e3175634bd5c1dfabf520e5a894e260323e26c3a3
83e8c/diff:/var/lib/docker/overlay2/58ca82e7923ce16120ce2bdcabd5d071ca9618a7139cac111d5d271fcb44d6b6/diff:/var/lib/docker/overlay2/c6b28d136c7e3834c9977a2115a7c798e71334d33a76997b156f96642e187677/diff:/var/lib/docker/overlay2/8a75a817735ea5c25b9b75502ba91bba33b5160dab28a17f2f44fa68bd8dcc3f/diff:/var/lib/docker/overlay2/4513fa1cc1e8023f3c0a924e36218c37dfe3595aec46e4d2d96d6c165774b8a3/diff:/var/lib/docker/overlay2/3d3be6ad927b487673f3c43210c9ea9a1acfa4d46cbcb724fce27baf9158b507/diff:/var/lib/docker/overlay2/b8e22ec10062469f680485d2f5f73afce0218c32b25e56188c00547a8152d0c7/diff:/var/lib/docker/overlay2/cb1cb5efbfa387d8fc791f28bdad103d39664ae58a6e372eddc5588db5779427/diff:/var/lib/docker/overlay2/c796de90ee7673fa4d316d056c320ee04f0b6ba574aaa33e4073e3a7200c11a6/diff:/var/lib/docker/overlay2/73c2de759693b5ffd934f7354e3db91ba89c6a5a9c24621fd7c27411bc335c5a/diff:/var/lib/docker/overlay2/46e9fe39b8edeecbe0b31037d24c2994ac3848fbb3af5ed3c47ca2fc1ad0d301/diff:/var/lib/docker/overlay2/febe0fa15a70685bf242a86e91427efdb9b7ec
302a48a7004f89cc569145c7a1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6175df1d28c3e25a0e5c63a5d7570f40eb9f7f886fa3164897a59ab06ccc7cd7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6175df1d28c3e25a0e5c63a5d7570f40eb9f7f886fa3164897a59ab06ccc7cd7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6175df1d28c3e25a0e5c63a5d7570f40eb9f7f886fa3164897a59ab06ccc7cd7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-110019",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-110019/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-110019",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-110019",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-110019",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "44494f97785315f2d1fdc6bb319dc28d787c45133affaa204ff4f7752507390b",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "64998"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "64999"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "65000"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "65001"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "65002"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/44494f977853",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-110019": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "179e62f50506",
	                        "old-k8s-version-110019"
	                    ],
	                    "NetworkID": "70a1b44058ab5d3fa2f8c48ca78ea76e689efbb2630885d7458319462051756b",
	                    "EndpointID": "0b18270530563710af60960f244d7aa6644373128ea0b53fb06c325870e567a9",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-110019 -n old-k8s-version-110019
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-110019 -n old-k8s-version-110019: exit status 6 (392.781296ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1109 11:05:59.734513   37174 status.go:415] kubeconfig endpoint: extract IP: "old-k8s-version-110019" does not appear in /Users/jenkins/minikube-integration/15331-22028/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-110019" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (89.68s)

TestStartStop/group/old-k8s-version/serial/SecondStart (489.95s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-110019 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0
E1109 11:06:12.605223   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/auto-104027/client.crt: no such file or directory
E1109 11:06:19.774265   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/calico-104028/client.crt: no such file or directory
E1109 11:06:28.329543   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/functional-100827/client.crt: no such file or directory
E1109 11:06:33.748232   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/bridge-104027/client.crt: no such file or directory
E1109 11:06:45.265518   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/functional-100827/client.crt: no such file or directory

=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p old-k8s-version-110019 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0: exit status 109 (8m5.428898475s)

-- stdout --
	* [old-k8s-version-110019] minikube v1.28.0 on Darwin 13.0
	  - MINIKUBE_LOCATION=15331
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15331-22028/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15331-22028/.minikube
	* Kubernetes 1.25.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.25.3
	* Using the docker driver based on existing profile
	* Starting control plane node old-k8s-version-110019 in cluster old-k8s-version-110019
	* Pulling base image ...
	* Restarting existing docker container for "old-k8s-version-110019" ...
	* Preparing Kubernetes v1.16.0 on Docker 20.10.20 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

-- /stdout --
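Note that the stdout above walks through "Generating certificates and keys" and "Booting up control plane" twice: kubeadm init evidently failed once, was retried, and the run still gave up after roughly eight minutes with exit status 109. The usual next step, in line with the advice printed by the earlier failure box, is to capture the full logs and the kubelet journal from the node, for example:

    # collect full logs for the profile, as the failure box earlier suggests
    out/minikube-darwin-amd64 logs -p old-k8s-version-110019 --file=logs.txt
    # the kubelet journal inside the node usually shows why the control plane never came up
    out/minikube-darwin-amd64 ssh -p old-k8s-version-110019 -- sudo journalctl -u kubelet --no-pager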
** stderr ** 
	I1109 11:06:01.793213   37204 out.go:296] Setting OutFile to fd 1 ...
	I1109 11:06:01.793417   37204 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1109 11:06:01.793422   37204 out.go:309] Setting ErrFile to fd 2...
	I1109 11:06:01.793426   37204 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1109 11:06:01.793545   37204 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15331-22028/.minikube/bin
	I1109 11:06:01.794071   37204 out.go:303] Setting JSON to false
	I1109 11:06:01.813327   37204 start.go:116] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":14736,"bootTime":1668006025,"procs":383,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.0","kernelVersion":"22.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W1109 11:06:01.813426   37204 start.go:124] gopshost.Virtualization returned error: not implemented yet
	I1109 11:06:01.835658   37204 out.go:177] * [old-k8s-version-110019] minikube v1.28.0 on Darwin 13.0
	I1109 11:06:01.878648   37204 notify.go:220] Checking for updates...
	I1109 11:06:01.900458   37204 out.go:177]   - MINIKUBE_LOCATION=15331
	I1109 11:06:01.922209   37204 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15331-22028/kubeconfig
	I1109 11:06:01.943305   37204 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1109 11:06:01.964434   37204 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 11:06:01.986085   37204 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15331-22028/.minikube
	I1109 11:06:02.007537   37204 config.go:180] Loaded profile config "old-k8s-version-110019": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I1109 11:06:02.029158   37204 out.go:177] * Kubernetes 1.25.3 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.25.3
	I1109 11:06:02.050356   37204 driver.go:365] Setting default libvirt URI to qemu:///system
	I1109 11:06:02.112685   37204 docker.go:137] docker version: linux-20.10.20
	I1109 11:06:02.112847   37204 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 11:06:02.253201   37204 info.go:266] docker info: {ID:DDH7:AERQ:YR2O:D5QA:GJP7:5WGB:4FTA:645H:2EO7:4YBT:LF5P:55H2 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:53 SystemTime:2022-11-09 19:06:02.175311599 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231719936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.1] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
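
The docker system info --format "{{json .}}" call that produced the blob above returns the daemon's entire info struct as one JSON object; the handful of fields the log actually acts on (server version, OS, CPU and memory budget) can be pulled out standalone with a sketch like the following, assuming jq is installed:

	# same keys as in the log line; jq's {A, B} shorthand projects those fields
	docker system info --format '{{json .}}' \
	  | jq '{ServerVersion, OperatingSystem, NCPU, MemTotal}'
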
	I1109 11:06:02.296689   37204 out.go:177] * Using the docker driver based on existing profile
	I1109 11:06:02.317607   37204 start.go:282] selected driver: docker
	I1109 11:06:02.317629   37204 start.go:808] validating driver "docker" against &{Name:old-k8s-version-110019 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-110019 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1109 11:06:02.317792   37204 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 11:06:02.321629   37204 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 11:06:02.462269   37204 info.go:266] docker info: {ID:DDH7:AERQ:YR2O:D5QA:GJP7:5WGB:4FTA:645H:2EO7:4YBT:LF5P:55H2 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:62 OomKillDisable:false NGoroutines:53 SystemTime:2022-11-09 19:06:02.38311555 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231719936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.1] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1109 11:06:02.462445   37204 start_flags.go:901] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 11:06:02.462463   37204 cni.go:95] Creating CNI manager for ""
	I1109 11:06:02.462475   37204 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1109 11:06:02.462485   37204 start_flags.go:317] config:
	{Name:old-k8s-version-110019 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-110019 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1109 11:06:02.505843   37204 out.go:177] * Starting control plane node old-k8s-version-110019 in cluster old-k8s-version-110019
	I1109 11:06:02.529048   37204 cache.go:120] Beginning downloading kic base image for docker with docker
	I1109 11:06:02.551129   37204 out.go:177] * Pulling base image ...
	I1109 11:06:02.593991   37204 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1109 11:06:02.594059   37204 image.go:76] Checking for gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon
	I1109 11:06:02.594107   37204 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15331-22028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I1109 11:06:02.594137   37204 cache.go:57] Caching tarball of preloaded images
	I1109 11:06:02.594378   37204 preload.go:174] Found /Users/jenkins/minikube-integration/15331-22028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1109 11:06:02.594398   37204 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I1109 11:06:02.595361   37204 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/old-k8s-version-110019/config.json ...
	I1109 11:06:02.650490   37204 image.go:80] Found gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon, skipping pull
	I1109 11:06:02.650507   37204 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 exists in daemon, skipping load
	I1109 11:06:02.650516   37204 cache.go:208] Successfully downloaded all kic artifacts
	I1109 11:06:02.650571   37204 start.go:364] acquiring machines lock for old-k8s-version-110019: {Name:mk76b064b5c16d3f79b919264a63d8292ad54339 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 11:06:02.650668   37204 start.go:368] acquired machines lock for "old-k8s-version-110019" in 74.752µs
	I1109 11:06:02.650695   37204 start.go:96] Skipping create...Using existing machine configuration
	I1109 11:06:02.650706   37204 fix.go:55] fixHost starting: 
	I1109 11:06:02.650968   37204 cli_runner.go:164] Run: docker container inspect old-k8s-version-110019 --format={{.State.Status}}
	I1109 11:06:02.707369   37204 fix.go:103] recreateIfNeeded on old-k8s-version-110019: state=Stopped err=<nil>
	W1109 11:06:02.707409   37204 fix.go:129] unexpected machine state, will restart: <nil>
	I1109 11:06:02.729121   37204 out.go:177] * Restarting existing docker container for "old-k8s-version-110019" ...
	I1109 11:06:02.750128   37204 cli_runner.go:164] Run: docker start old-k8s-version-110019
	I1109 11:06:03.117404   37204 cli_runner.go:164] Run: docker container inspect old-k8s-version-110019 --format={{.State.Status}}
	I1109 11:06:03.175585   37204 kic.go:415] container "old-k8s-version-110019" state is running.
	I1109 11:06:03.176169   37204 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-110019
	I1109 11:06:03.277993   37204 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/old-k8s-version-110019/config.json ...
	I1109 11:06:03.278780   37204 machine.go:88] provisioning docker machine ...
	I1109 11:06:03.278835   37204 ubuntu.go:169] provisioning hostname "old-k8s-version-110019"
	I1109 11:06:03.279003   37204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-110019
	I1109 11:06:03.364530   37204 main.go:134] libmachine: Using SSH client type: native
	I1109 11:06:03.364741   37204 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6d00] 0x13e9e80 <nil>  [] 0s} 127.0.0.1 65162 <nil> <nil>}
	I1109 11:06:03.364756   37204 main.go:134] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-110019 && echo "old-k8s-version-110019" | sudo tee /etc/hostname
	I1109 11:06:03.504292   37204 main.go:134] libmachine: SSH cmd err, output: <nil>: old-k8s-version-110019
	
	I1109 11:06:03.504417   37204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-110019
	I1109 11:06:03.564967   37204 main.go:134] libmachine: Using SSH client type: native
	I1109 11:06:03.565119   37204 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6d00] 0x13e9e80 <nil>  [] 0s} 127.0.0.1 65162 <nil> <nil>}
	I1109 11:06:03.565133   37204 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-110019' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-110019/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-110019' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 11:06:03.685080   37204 main.go:134] libmachine: SSH cmd err, output: <nil>: 
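
The provisioner resolves its SSH endpoint before each remote step with the Go-template inspect shown repeatedly above; run standalone, the same template prints just the host port Docker mapped to the container's 22/tcp (65162 in this run):

	# prints the host port bound to the container's SSH port
	docker container inspect \
	  -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
	  old-k8s-version-110019
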
	I1109 11:06:03.685109   37204 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15331-22028/.minikube CaCertPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15331-22028/.minikube}
	I1109 11:06:03.685132   37204 ubuntu.go:177] setting up certificates
	I1109 11:06:03.685142   37204 provision.go:83] configureAuth start
	I1109 11:06:03.685250   37204 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-110019
	I1109 11:06:03.743725   37204 provision.go:138] copyHostCerts
	I1109 11:06:03.743837   37204 exec_runner.go:144] found /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.pem, removing ...
	I1109 11:06:03.743848   37204 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.pem
	I1109 11:06:03.743961   37204 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.pem (1082 bytes)
	I1109 11:06:03.744173   37204 exec_runner.go:144] found /Users/jenkins/minikube-integration/15331-22028/.minikube/cert.pem, removing ...
	I1109 11:06:03.744180   37204 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15331-22028/.minikube/cert.pem
	I1109 11:06:03.744241   37204 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15331-22028/.minikube/cert.pem (1123 bytes)
	I1109 11:06:03.744389   37204 exec_runner.go:144] found /Users/jenkins/minikube-integration/15331-22028/.minikube/key.pem, removing ...
	I1109 11:06:03.744395   37204 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15331-22028/.minikube/key.pem
	I1109 11:06:03.744457   37204 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15331-22028/.minikube/key.pem (1675 bytes)
	I1109 11:06:03.744580   37204 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-110019 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube old-k8s-version-110019]
	I1109 11:06:03.810550   37204 provision.go:172] copyRemoteCerts
	I1109 11:06:03.810628   37204 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 11:06:03.810692   37204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-110019
	I1109 11:06:03.869739   37204 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65162 SSHKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/old-k8s-version-110019/id_rsa Username:docker}
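
Every remote command in this phase travels over plain SSH to that endpoint. An equivalent manual session, reusing the mapped port, per-machine key, and "docker" user from the sshutil line above (the probed command is just an example), would look like:

	ssh -p 65162 \
	    -i /Users/jenkins/minikube-integration/15331-22028/.minikube/machines/old-k8s-version-110019/id_rsa \
	    docker@127.0.0.1 'cat /etc/os-release'
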
	I1109 11:06:03.957567   37204 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1109 11:06:03.979190   37204 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1109 11:06:04.005190   37204 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1109 11:06:04.025106   37204 provision.go:86] duration metric: configureAuth took 339.954792ms
	I1109 11:06:04.025119   37204 ubuntu.go:193] setting minikube options for container-runtime
	I1109 11:06:04.025282   37204 config.go:180] Loaded profile config "old-k8s-version-110019": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I1109 11:06:04.025361   37204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-110019
	I1109 11:06:04.102710   37204 main.go:134] libmachine: Using SSH client type: native
	I1109 11:06:04.102873   37204 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6d00] 0x13e9e80 <nil>  [] 0s} 127.0.0.1 65162 <nil> <nil>}
	I1109 11:06:04.102885   37204 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1109 11:06:04.223324   37204 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1109 11:06:04.223337   37204 ubuntu.go:71] root file system type: overlay
	I1109 11:06:04.223507   37204 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1109 11:06:04.223628   37204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-110019
	I1109 11:06:04.281605   37204 main.go:134] libmachine: Using SSH client type: native
	I1109 11:06:04.281772   37204 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6d00] 0x13e9e80 <nil>  [] 0s} 127.0.0.1 65162 <nil> <nil>}
	I1109 11:06:04.281824   37204 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1109 11:06:04.409915   37204 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1109 11:06:04.410019   37204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-110019
	I1109 11:06:04.467667   37204 main.go:134] libmachine: Using SSH client type: native
	I1109 11:06:04.467821   37204 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6d00] 0x13e9e80 <nil>  [] 0s} 127.0.0.1 65162 <nil> <nil>}
	I1109 11:06:04.467841   37204 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1109 11:06:04.589382   37204 main.go:134] libmachine: SSH cmd err, output: <nil>: 
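
The one-liner above is an idempotent unit update: diff exits non-zero only when the rendered unit differs from the installed one, and only then does the || branch swap the file in and bounce the daemon. The same pattern generalized, with $UNIT_SRC as a hypothetical stand-in for whatever renders the candidate unit:

	sudo tee /lib/systemd/system/docker.service.new >/dev/null < "$UNIT_SRC"
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || {
	  # only reached when the rendered unit differs from the installed one
	  sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
	  sudo systemctl daemon-reload && sudo systemctl restart docker
	}
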
	I1109 11:06:04.589396   37204 machine.go:91] provisioned docker machine in 1.310612965s
	I1109 11:06:04.589408   37204 start.go:300] post-start starting for "old-k8s-version-110019" (driver="docker")
	I1109 11:06:04.589414   37204 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 11:06:04.589486   37204 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 11:06:04.589568   37204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-110019
	I1109 11:06:04.647913   37204 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65162 SSHKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/old-k8s-version-110019/id_rsa Username:docker}
	I1109 11:06:04.734002   37204 ssh_runner.go:195] Run: cat /etc/os-release
	I1109 11:06:04.737552   37204 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1109 11:06:04.737568   37204 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1109 11:06:04.737574   37204 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1109 11:06:04.737578   37204 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I1109 11:06:04.737587   37204 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15331-22028/.minikube/addons for local assets ...
	I1109 11:06:04.737679   37204 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15331-22028/.minikube/files for local assets ...
	I1109 11:06:04.737840   37204 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15331-22028/.minikube/files/etc/ssl/certs/228682.pem -> 228682.pem in /etc/ssl/certs
	I1109 11:06:04.738030   37204 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1109 11:06:04.745171   37204 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/files/etc/ssl/certs/228682.pem --> /etc/ssl/certs/228682.pem (1708 bytes)
	I1109 11:06:04.763319   37204 start.go:303] post-start completed in 173.903657ms
	I1109 11:06:04.763404   37204 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 11:06:04.763475   37204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-110019
	I1109 11:06:04.821098   37204 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65162 SSHKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/old-k8s-version-110019/id_rsa Username:docker}
	I1109 11:06:04.905768   37204 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1109 11:06:04.910178   37204 fix.go:57] fixHost completed within 2.259491119s
	I1109 11:06:04.910193   37204 start.go:83] releasing machines lock for "old-k8s-version-110019", held for 2.25953942s
	I1109 11:06:04.910317   37204 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-110019
	I1109 11:06:04.967074   37204 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
	I1109 11:06:04.967079   37204 ssh_runner.go:195] Run: systemctl --version
	I1109 11:06:04.967152   37204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-110019
	I1109 11:06:04.967167   37204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-110019
	I1109 11:06:05.030844   37204 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65162 SSHKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/old-k8s-version-110019/id_rsa Username:docker}
	I1109 11:06:05.031017   37204 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65162 SSHKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/old-k8s-version-110019/id_rsa Username:docker}
	I1109 11:06:05.362559   37204 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1109 11:06:05.372551   37204 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I1109 11:06:05.372620   37204 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1109 11:06:05.384246   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 11:06:05.396718   37204 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1109 11:06:05.461363   37204 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1109 11:06:05.544028   37204 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 11:06:05.614804   37204 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1109 11:06:05.820815   37204 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1109 11:06:05.850090   37204 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
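
A few steps back the runner wrote /etc/crictl.yaml pointing both the runtime and image endpoints at dockershim's socket; crictl reads that file by default, so once docker is back up, a quick sanity check (assuming crictl is present on the node) is simply:

	sudo crictl info   # should report the runtime reachable via /var/run/dockershim.sock
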
	I1109 11:06:05.901272   37204 out.go:204] * Preparing Kubernetes v1.16.0 on Docker 20.10.20 ...
	I1109 11:06:05.901422   37204 cli_runner.go:164] Run: docker exec -t old-k8s-version-110019 dig +short host.docker.internal
	I1109 11:06:06.014035   37204 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I1109 11:06:06.014154   37204 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I1109 11:06:06.020332   37204 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 11:06:06.030391   37204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-110019
	I1109 11:06:06.087926   37204 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1109 11:06:06.088022   37204 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1109 11:06:06.112239   37204 docker.go:613] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I1109 11:06:06.112259   37204 docker.go:543] Images already preloaded, skipping extraction
	I1109 11:06:06.112371   37204 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1109 11:06:06.136272   37204 docker.go:613] Got preloaded images: -- stdout --
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-apiserver:v1.16.0
	k8s.gcr.io/kube-proxy:v1.16.0
	k8s.gcr.io/kube-controller-manager:v1.16.0
	k8s.gcr.io/kube-scheduler:v1.16.0
	k8s.gcr.io/etcd:3.3.15-0
	k8s.gcr.io/coredns:1.6.2
	k8s.gcr.io/pause:3.1
	
	-- /stdout --
	I1109 11:06:06.136293   37204 cache_images.go:84] Images are preloaded, skipping loading
	I1109 11:06:06.136402   37204 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1109 11:06:06.206777   37204 cni.go:95] Creating CNI manager for ""
	I1109 11:06:06.206793   37204 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1109 11:06:06.206806   37204 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1109 11:06:06.206828   37204 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-110019 NodeName:old-k8s-version-110019 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
	I1109 11:06:06.206945   37204 kubeadm.go:161] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "old-k8s-version-110019"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-110019
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.67.2:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1109 11:06:06.207029   37204 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=old-k8s-version-110019 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-110019 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
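
The rendered kubeadm.yaml and kubelet drop-in above are what the phased "kubeadm init phase ..." and kubelet-start steps further down consume. To exercise the whole file without touching the node, kubeadm's dry-run mode can be pointed at it (a sketch only; flag behavior on v1.16 assumed):

	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
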
	I1109 11:06:06.207102   37204 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I1109 11:06:06.214844   37204 binaries.go:44] Found k8s binaries, skipping transfer
	I1109 11:06:06.214920   37204 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1109 11:06:06.222031   37204 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (348 bytes)
	I1109 11:06:06.234718   37204 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1109 11:06:06.247260   37204 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2120 bytes)
	I1109 11:06:06.260039   37204 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I1109 11:06:06.263657   37204 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
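
That bash one-liner is an idempotent /etc/hosts update: filter out any stale line for the name, append the fresh mapping, and copy the temp file back over /etc/hosts in one step. Generalized, with NAME and IP as hypothetical stand-ins:

	NAME=host.example.internal IP=192.168.65.2
	# drop any existing entry for NAME, append the fresh mapping, copy back in one step
	{ grep -v "[[:space:]]$NAME\$" /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts
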
	I1109 11:06:06.273207   37204 certs.go:54] Setting up /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/old-k8s-version-110019 for IP: 192.168.67.2
	I1109 11:06:06.273346   37204 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.key
	I1109 11:06:06.273404   37204 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15331-22028/.minikube/proxy-client-ca.key
	I1109 11:06:06.273517   37204 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/old-k8s-version-110019/client.key
	I1109 11:06:06.273594   37204 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/old-k8s-version-110019/apiserver.key.c7fa3a9e
	I1109 11:06:06.273658   37204 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/old-k8s-version-110019/proxy-client.key
	I1109 11:06:06.273895   37204 certs.go:388] found cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/22868.pem (1338 bytes)
	W1109 11:06:06.273936   37204 certs.go:384] ignoring /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/22868_empty.pem, impossibly tiny 0 bytes
	I1109 11:06:06.273948   37204 certs.go:388] found cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca-key.pem (1675 bytes)
	I1109 11:06:06.273988   37204 certs.go:388] found cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca.pem (1082 bytes)
	I1109 11:06:06.274025   37204 certs.go:388] found cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/cert.pem (1123 bytes)
	I1109 11:06:06.274057   37204 certs.go:388] found cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/key.pem (1675 bytes)
	I1109 11:06:06.274127   37204 certs.go:388] found cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15331-22028/.minikube/files/etc/ssl/certs/228682.pem (1708 bytes)
	I1109 11:06:06.274746   37204 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/old-k8s-version-110019/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1109 11:06:06.292081   37204 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/old-k8s-version-110019/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1109 11:06:06.310172   37204 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/old-k8s-version-110019/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1109 11:06:06.327489   37204 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/old-k8s-version-110019/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1109 11:06:06.344349   37204 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 11:06:06.361215   37204 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1109 11:06:06.378302   37204 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 11:06:06.395514   37204 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1109 11:06:06.412461   37204 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 11:06:06.429629   37204 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/22868.pem --> /usr/share/ca-certificates/22868.pem (1338 bytes)
	I1109 11:06:06.447517   37204 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/files/etc/ssl/certs/228682.pem --> /usr/share/ca-certificates/228682.pem (1708 bytes)
	I1109 11:06:06.464254   37204 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1109 11:06:06.476688   37204 ssh_runner.go:195] Run: openssl version
	I1109 11:06:06.482147   37204 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 11:06:06.490182   37204 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 11:06:06.493984   37204 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Nov  9 18:04 /usr/share/ca-certificates/minikubeCA.pem
	I1109 11:06:06.494048   37204 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 11:06:06.499504   37204 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1109 11:06:06.507068   37204 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22868.pem && ln -fs /usr/share/ca-certificates/22868.pem /etc/ssl/certs/22868.pem"
	I1109 11:06:06.514933   37204 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22868.pem
	I1109 11:06:06.518984   37204 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Nov  9 18:08 /usr/share/ca-certificates/22868.pem
	I1109 11:06:06.519053   37204 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22868.pem
	I1109 11:06:06.524676   37204 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22868.pem /etc/ssl/certs/51391683.0"
	I1109 11:06:06.531874   37204 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/228682.pem && ln -fs /usr/share/ca-certificates/228682.pem /etc/ssl/certs/228682.pem"
	I1109 11:06:06.539628   37204 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/228682.pem
	I1109 11:06:06.543703   37204 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Nov  9 18:08 /usr/share/ca-certificates/228682.pem
	I1109 11:06:06.543750   37204 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/228682.pem
	I1109 11:06:06.549015   37204 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/228682.pem /etc/ssl/certs/3ec20f2e.0"
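
The .0 symlink names above (b5213941.0, 51391683.0, 3ec20f2e.0) are the certificates' OpenSSL subject hashes, which is exactly what the "openssl x509 -hash -noout" runs compute; one such link can be recreated by hand like so:

	# subject hash becomes the lookup name OpenSSL expects in /etc/ssl/certs
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/$h.0"
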
	I1109 11:06:06.556069   37204 kubeadm.go:396] StartCluster: {Name:old-k8s-version-110019 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-110019 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1109 11:06:06.556189   37204 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1109 11:06:06.578091   37204 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1109 11:06:06.585626   37204 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I1109 11:06:06.585640   37204 kubeadm.go:627] restartCluster start
	I1109 11:06:06.585692   37204 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1109 11:06:06.592591   37204 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1109 11:06:06.592676   37204 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-110019
	I1109 11:06:06.650237   37204 kubeconfig.go:135] verify returned: extract IP: "old-k8s-version-110019" does not appear in /Users/jenkins/minikube-integration/15331-22028/kubeconfig
	I1109 11:06:06.650410   37204 kubeconfig.go:146] "old-k8s-version-110019" context is missing from /Users/jenkins/minikube-integration/15331-22028/kubeconfig - will repair!
	I1109 11:06:06.650735   37204 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15331-22028/kubeconfig: {Name:mk02bb1c68cad934afd737965b2dbda8f5a4ba2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 11:06:06.652074   37204 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1109 11:06:06.659453   37204 api_server.go:165] Checking apiserver status ...
	I1109 11:06:06.659513   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1109 11:06:06.667754   37204 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1109 11:06:06.868948   37204 api_server.go:165] Checking apiserver status ...
	I1109 11:06:06.869098   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1109 11:06:06.879273   37204 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1109 11:06:07.068565   37204 api_server.go:165] Checking apiserver status ...
	I1109 11:06:07.068715   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1109 11:06:07.080084   37204 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1109 11:06:07.269884   37204 api_server.go:165] Checking apiserver status ...
	I1109 11:06:07.270076   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1109 11:06:07.280955   37204 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1109 11:06:07.468458   37204 api_server.go:165] Checking apiserver status ...
	I1109 11:06:07.468654   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1109 11:06:07.479474   37204 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1109 11:06:07.668927   37204 api_server.go:165] Checking apiserver status ...
	I1109 11:06:07.669083   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1109 11:06:07.680276   37204 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1109 11:06:07.867839   37204 api_server.go:165] Checking apiserver status ...
	I1109 11:06:07.868027   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1109 11:06:07.879008   37204 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1109 11:06:08.069932   37204 api_server.go:165] Checking apiserver status ...
	I1109 11:06:08.070066   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1109 11:06:08.080730   37204 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1109 11:06:08.269258   37204 api_server.go:165] Checking apiserver status ...
	I1109 11:06:08.269418   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1109 11:06:08.280481   37204 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1109 11:06:08.469880   37204 api_server.go:165] Checking apiserver status ...
	I1109 11:06:08.470044   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1109 11:06:08.480391   37204 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1109 11:06:08.669879   37204 api_server.go:165] Checking apiserver status ...
	I1109 11:06:08.670044   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1109 11:06:08.680960   37204 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1109 11:06:08.867887   37204 api_server.go:165] Checking apiserver status ...
	I1109 11:06:08.868062   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1109 11:06:08.878654   37204 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1109 11:06:09.067871   37204 api_server.go:165] Checking apiserver status ...
	I1109 11:06:09.068045   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1109 11:06:09.078445   37204 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1109 11:06:09.268832   37204 api_server.go:165] Checking apiserver status ...
	I1109 11:06:09.268987   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1109 11:06:09.281390   37204 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1109 11:06:09.469963   37204 api_server.go:165] Checking apiserver status ...
	I1109 11:06:09.470077   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1109 11:06:09.480714   37204 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1109 11:06:09.669881   37204 api_server.go:165] Checking apiserver status ...
	I1109 11:06:09.670047   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1109 11:06:09.681113   37204 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1109 11:06:09.681123   37204 api_server.go:165] Checking apiserver status ...
	I1109 11:06:09.681181   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1109 11:06:09.689332   37204 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1109 11:06:09.689344   37204 kubeadm.go:602] needs reconfigure: apiserver error: timed out waiting for the condition
	I1109 11:06:09.689353   37204 kubeadm.go:1114] stopping kube-system containers ...
	I1109 11:06:09.689439   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1109 11:06:09.710753   37204 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1109 11:06:09.721081   37204 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1109 11:06:09.729053   37204 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5691 Nov  9 19:02 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5731 Nov  9 19:02 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5791 Nov  9 19:02 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5675 Nov  9 19:02 /etc/kubernetes/scheduler.conf
	
	I1109 11:06:09.729122   37204 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1109 11:06:09.736765   37204 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1109 11:06:09.744510   37204 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1109 11:06:09.751752   37204 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
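
The four grep runs check that every kubeconfig already on the node points at the expected control-plane endpoint; a file that fails the check is stale and must be regenerated. A sketch in which grep's exit status does the work:

    import "os/exec"

    // kubeconfigsCurrent reports whether every node kubeconfig references
    // the expected endpoint, here "https://control-plane.minikube.internal:8443".
    func kubeconfigsCurrent(endpoint string) bool {
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            // grep exits non-zero when the endpoint is absent.
            if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
                return false
            }
        }
        return true
    }
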
	I1109 11:06:09.759529   37204 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1109 11:06:09.767161   37204 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1109 11:06:09.767173   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1109 11:06:09.825148   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1109 11:06:10.475282   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1109 11:06:10.683103   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1109 11:06:10.740786   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
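
Rather than a full kubeadm init, the reconfigure path replays individual init phases in dependency order: certs, then the kubeconfigs derived from them, then kubelet start, the static control-plane manifests, and local etcd. A sketch of the same sequence (minikube actually prefixes each command with sudo env PATH=... over SSH; the kubeadm path here is illustrative):

    import (
        "fmt"
        "os/exec"
    )

    func replayKubeadmPhases(kubeadm, cfg string) error {
        phases := [][]string{
            {"init", "phase", "certs", "all"},
            {"init", "phase", "kubeconfig", "all"},
            {"init", "phase", "kubelet-start"},
            {"init", "phase", "control-plane", "all"},
            {"init", "phase", "etcd", "local"},
        }
        for _, p := range phases {
            args := append(p, "--config", cfg)
            if err := exec.Command("sudo", append([]string{kubeadm}, args...)...).Run(); err != nil {
                return fmt.Errorf("kubeadm %v: %w", p, err)
            }
        }
        return nil
    }
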
	I1109 11:06:10.802784   37204 api_server.go:51] waiting for apiserver process to appear ...
	I1109 11:06:10.802856   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:11.311437   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:11.812494   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:12.311267   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:12.811973   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:13.311534   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:13.813278   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:14.312021   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:14.811523   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:15.311967   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:15.813241   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:16.313224   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:16.811355   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:17.313217   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:17.811956   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:18.311958   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:18.811085   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:19.313227   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:19.813186   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:20.311038   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:20.811766   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:21.313002   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:21.811020   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:22.311284   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:22.811008   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:23.311134   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:23.811134   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:24.311211   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:24.811132   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:25.311707   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:25.812948   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:26.311145   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:26.811010   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:27.310951   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:27.811053   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:28.311105   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:28.811088   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:29.311689   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:29.811927   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:30.311969   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:30.811070   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:31.312028   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:31.811347   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:32.311615   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:32.811006   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:33.311180   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:33.811282   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:34.311569   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:34.812951   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:35.311005   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:35.811004   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:36.313081   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:36.812946   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:37.312300   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:37.811127   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:38.312380   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:38.812358   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:39.311314   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:39.811493   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:40.311184   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:40.810963   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:41.311394   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:41.810896   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:42.311405   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:42.810984   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:43.310958   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:43.811002   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:44.312685   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:44.811471   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:45.311011   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:45.810937   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:46.310839   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:46.811060   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:47.312775   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:47.811493   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:48.311373   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:48.810943   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:49.311281   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:49.811544   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:50.312069   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:50.812609   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:51.312877   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:51.810861   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:52.310791   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:52.811187   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:53.310754   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:53.810724   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:54.311283   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:54.810784   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:55.310778   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:55.810885   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:56.310886   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:56.810803   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:57.311543   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:57.810983   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:58.311651   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:58.811223   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:59.310805   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:06:59.810785   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:07:00.310742   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:07:00.810696   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:07:01.310778   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:07:01.810706   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:07:02.310677   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:07:02.810891   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:07:03.310792   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:07:03.812666   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:07:04.310971   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:07:04.812747   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:07:05.311828   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:07:05.810924   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:07:06.312802   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:07:06.812750   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:07:07.312326   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:07:07.812736   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:07:08.310716   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:07:08.810961   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:07:09.310790   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:07:09.810707   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:07:10.310690   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
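
After kubelet-start the wait loop resumes, now at roughly 500ms per probe, and runs for about a minute (11:06:10 to 11:07:10) before escalating to the diagnostics sweep below. The shape, as a plain loop (apiserverRunning from the first sketch):

    import "time"

    func waitThenDiagnose(timeout time.Duration) {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if apiserverRunning() {
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        // Deadline passed with no process: fall through to collecting
        // container lists and logs, as the entries below do.
    }
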
	I1109 11:07:10.810977   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1109 11:07:10.833721   37204 logs.go:274] 0 containers: []
	W1109 11:07:10.833732   37204 logs.go:276] No container was found matching "kube-apiserver"
	I1109 11:07:10.833832   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1109 11:07:10.856170   37204 logs.go:274] 0 containers: []
	W1109 11:07:10.856183   37204 logs.go:276] No container was found matching "etcd"
	I1109 11:07:10.856270   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1109 11:07:10.879101   37204 logs.go:274] 0 containers: []
	W1109 11:07:10.879116   37204 logs.go:276] No container was found matching "coredns"
	I1109 11:07:10.879197   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1109 11:07:10.902473   37204 logs.go:274] 0 containers: []
	W1109 11:07:10.902485   37204 logs.go:276] No container was found matching "kube-scheduler"
	I1109 11:07:10.902585   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1109 11:07:10.925317   37204 logs.go:274] 0 containers: []
	W1109 11:07:10.925329   37204 logs.go:276] No container was found matching "kube-proxy"
	I1109 11:07:10.925415   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1109 11:07:10.948883   37204 logs.go:274] 0 containers: []
	W1109 11:07:10.948894   37204 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1109 11:07:10.948987   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1109 11:07:10.971580   37204 logs.go:274] 0 containers: []
	W1109 11:07:10.971592   37204 logs.go:276] No container was found matching "storage-provisioner"
	I1109 11:07:10.971685   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1109 11:07:10.993864   37204 logs.go:274] 0 containers: []
	W1109 11:07:10.993875   37204 logs.go:276] No container was found matching "kube-controller-manager"
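
With the process still missing, minikube sweeps each expected component (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kubernetes-dashboard, storage-provisioner, kube-controller-manager) with one docker ps query per name; "0 containers" on every line means not even exited containers exist, i.e. the control plane never came up at all. One query, sketched:

    import (
        "os/exec"
        "strings"
    )

    // findContainers returns the IDs of all containers (running or not)
    // for one component, matching kubelet's k8s_<name>_... naming.
    func findContainers(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil // empty slice = "0 containers"
    }
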
	I1109 11:07:10.993882   37204 logs.go:123] Gathering logs for kubelet ...
	I1109 11:07:10.993889   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1109 11:07:11.032899   37204 logs.go:123] Gathering logs for dmesg ...
	I1109 11:07:11.032914   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
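
The dmesg capture keeps only warning-and-worse kernel messages and caps the output at the most recent 400 lines. Annotated below (flags per util-linux dmesg, assumed present in the node image):

    import "os/exec"

    func recentKernelWarnings() ([]byte, error) {
        // -P: no pager            -H: human-readable timestamps
        // -L=never: no color codes in the captured text
        // --level ...: warnings and worse only; tail caps at 400 lines
        return exec.Command("/bin/bash", "-c",
            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400").Output()
    }
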
	I1109 11:07:11.045025   37204 logs.go:123] Gathering logs for describe nodes ...
	I1109 11:07:11.045037   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1109 11:07:11.110773   37204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
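
The "connection refused" from kubectl is consistent with the empty container sweep: nothing is listening on the node-local apiserver endpoint at localhost:8443. An equivalent direct probe:

    import (
        "net"
        "time"
    )

    func apiserverPortOpen() bool {
        conn, err := net.DialTimeout("tcp", "localhost:8443", time.Second)
        if err != nil {
            return false // "connection refused" lands here
        }
        conn.Close()
        return true
    }
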
	I1109 11:07:11.110788   37204 logs.go:123] Gathering logs for Docker ...
	I1109 11:07:11.110794   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1109 11:07:11.124427   37204 logs.go:123] Gathering logs for container status ...
	I1109 11:07:11.124439   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1109 11:07:13.177001   37204 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052569974s)
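
The container-status command is runtime-agnostic by construction: the backticks substitute the path of crictl if it exists (or the literal word "crictl" if not), and when that invocation fails the || falls back to docker ps -a. The same line, wrapped for running from Go:

    import "os/exec"

    func containerStatus() ([]byte, error) {
        // Prefer crictl when installed; otherwise the bare word "crictl"
        // fails to run and the || branch falls back to docker.
        script := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
        return exec.Command("/bin/bash", "-c", script).Output()
    }
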
	I1109 11:07:15.677361   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:07:15.811253   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1109 11:07:15.836355   37204 logs.go:274] 0 containers: []
	W1109 11:07:15.836367   37204 logs.go:276] No container was found matching "kube-apiserver"
	I1109 11:07:15.836457   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1109 11:07:15.864313   37204 logs.go:274] 0 containers: []
	W1109 11:07:15.864327   37204 logs.go:276] No container was found matching "etcd"
	I1109 11:07:15.864443   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1109 11:07:15.889758   37204 logs.go:274] 0 containers: []
	W1109 11:07:15.889770   37204 logs.go:276] No container was found matching "coredns"
	I1109 11:07:15.889868   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1109 11:07:15.914479   37204 logs.go:274] 0 containers: []
	W1109 11:07:15.914495   37204 logs.go:276] No container was found matching "kube-scheduler"
	I1109 11:07:15.914585   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1109 11:07:15.938454   37204 logs.go:274] 0 containers: []
	W1109 11:07:15.938467   37204 logs.go:276] No container was found matching "kube-proxy"
	I1109 11:07:15.938549   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1109 11:07:15.961672   37204 logs.go:274] 0 containers: []
	W1109 11:07:15.961683   37204 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1109 11:07:15.961767   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1109 11:07:15.985245   37204 logs.go:274] 0 containers: []
	W1109 11:07:15.985257   37204 logs.go:276] No container was found matching "storage-provisioner"
	I1109 11:07:15.985339   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1109 11:07:16.010785   37204 logs.go:274] 0 containers: []
	W1109 11:07:16.010796   37204 logs.go:276] No container was found matching "kube-controller-manager"
	I1109 11:07:16.010811   37204 logs.go:123] Gathering logs for Docker ...
	I1109 11:07:16.010822   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1109 11:07:16.024792   37204 logs.go:123] Gathering logs for container status ...
	I1109 11:07:16.024806   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1109 11:07:18.090484   37204 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.065684579s)
	I1109 11:07:18.090606   37204 logs.go:123] Gathering logs for kubelet ...
	I1109 11:07:18.090613   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1109 11:07:18.137004   37204 logs.go:123] Gathering logs for dmesg ...
	I1109 11:07:18.137026   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1109 11:07:18.155363   37204 logs.go:123] Gathering logs for describe nodes ...
	I1109 11:07:18.155379   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1109 11:07:18.225193   37204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
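
From here the log settles into a steady cycle, one full diagnostics pass roughly every five seconds (11:07:15, :20, :25, ...): probe for the process, sweep the eight component names, then gather kubelet, dmesg, describe-nodes, Docker, and container-status output. The cadence, sketched:

    import "time"

    func diagnoseLoop(stop <-chan struct{}) {
        t := time.NewTicker(5 * time.Second)
        defer t.Stop()
        for {
            select {
            case <-stop:
                return
            case <-t.C:
                if apiserverRunning() { // from the first sketch
                    return
                }
                // sweep container names and re-gather logs, as above
            }
        }
    }
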
	I1109 11:07:20.725494   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:07:20.810937   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1109 11:07:20.836364   37204 logs.go:274] 0 containers: []
	W1109 11:07:20.836379   37204 logs.go:276] No container was found matching "kube-apiserver"
	I1109 11:07:20.836472   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1109 11:07:20.859170   37204 logs.go:274] 0 containers: []
	W1109 11:07:20.859182   37204 logs.go:276] No container was found matching "etcd"
	I1109 11:07:20.859264   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1109 11:07:20.883257   37204 logs.go:274] 0 containers: []
	W1109 11:07:20.883272   37204 logs.go:276] No container was found matching "coredns"
	I1109 11:07:20.883362   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1109 11:07:20.907692   37204 logs.go:274] 0 containers: []
	W1109 11:07:20.907706   37204 logs.go:276] No container was found matching "kube-scheduler"
	I1109 11:07:20.907800   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1109 11:07:20.932308   37204 logs.go:274] 0 containers: []
	W1109 11:07:20.932322   37204 logs.go:276] No container was found matching "kube-proxy"
	I1109 11:07:20.932416   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1109 11:07:20.956054   37204 logs.go:274] 0 containers: []
	W1109 11:07:20.956073   37204 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1109 11:07:20.956160   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1109 11:07:20.979909   37204 logs.go:274] 0 containers: []
	W1109 11:07:20.979921   37204 logs.go:276] No container was found matching "storage-provisioner"
	I1109 11:07:20.980006   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1109 11:07:21.003792   37204 logs.go:274] 0 containers: []
	W1109 11:07:21.003804   37204 logs.go:276] No container was found matching "kube-controller-manager"
	I1109 11:07:21.003812   37204 logs.go:123] Gathering logs for kubelet ...
	I1109 11:07:21.003818   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1109 11:07:21.043730   37204 logs.go:123] Gathering logs for dmesg ...
	I1109 11:07:21.043756   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1109 11:07:21.057051   37204 logs.go:123] Gathering logs for describe nodes ...
	I1109 11:07:21.057065   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1109 11:07:21.113933   37204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1109 11:07:21.113947   37204 logs.go:123] Gathering logs for Docker ...
	I1109 11:07:21.113954   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1109 11:07:21.129621   37204 logs.go:123] Gathering logs for container status ...
	I1109 11:07:21.129634   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1109 11:07:23.177088   37204 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.047398312s)
	I1109 11:07:25.677409   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:07:25.810608   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1109 11:07:25.840225   37204 logs.go:274] 0 containers: []
	W1109 11:07:25.840238   37204 logs.go:276] No container was found matching "kube-apiserver"
	I1109 11:07:25.840337   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1109 11:07:25.873549   37204 logs.go:274] 0 containers: []
	W1109 11:07:25.873561   37204 logs.go:276] No container was found matching "etcd"
	I1109 11:07:25.873652   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1109 11:07:25.906951   37204 logs.go:274] 0 containers: []
	W1109 11:07:25.906966   37204 logs.go:276] No container was found matching "coredns"
	I1109 11:07:25.907075   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1109 11:07:25.937498   37204 logs.go:274] 0 containers: []
	W1109 11:07:25.937512   37204 logs.go:276] No container was found matching "kube-scheduler"
	I1109 11:07:25.937611   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1109 11:07:25.968721   37204 logs.go:274] 0 containers: []
	W1109 11:07:25.968737   37204 logs.go:276] No container was found matching "kube-proxy"
	I1109 11:07:25.968834   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1109 11:07:25.996367   37204 logs.go:274] 0 containers: []
	W1109 11:07:25.996385   37204 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1109 11:07:25.996482   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1109 11:07:26.025632   37204 logs.go:274] 0 containers: []
	W1109 11:07:26.025644   37204 logs.go:276] No container was found matching "storage-provisioner"
	I1109 11:07:26.025729   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1109 11:07:26.052346   37204 logs.go:274] 0 containers: []
	W1109 11:07:26.052358   37204 logs.go:276] No container was found matching "kube-controller-manager"
	I1109 11:07:26.052365   37204 logs.go:123] Gathering logs for kubelet ...
	I1109 11:07:26.052372   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1109 11:07:26.098704   37204 logs.go:123] Gathering logs for dmesg ...
	I1109 11:07:26.098723   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1109 11:07:26.112860   37204 logs.go:123] Gathering logs for describe nodes ...
	I1109 11:07:26.112877   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1109 11:07:26.180951   37204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1109 11:07:26.180970   37204 logs.go:123] Gathering logs for Docker ...
	I1109 11:07:26.180977   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1109 11:07:26.197945   37204 logs.go:123] Gathering logs for container status ...
	I1109 11:07:26.197963   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1109 11:07:28.251304   37204 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053347747s)
	I1109 11:07:30.751834   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:07:30.810692   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1109 11:07:30.835163   37204 logs.go:274] 0 containers: []
	W1109 11:07:30.835184   37204 logs.go:276] No container was found matching "kube-apiserver"
	I1109 11:07:30.835273   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1109 11:07:30.857967   37204 logs.go:274] 0 containers: []
	W1109 11:07:30.857979   37204 logs.go:276] No container was found matching "etcd"
	I1109 11:07:30.858073   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1109 11:07:30.880525   37204 logs.go:274] 0 containers: []
	W1109 11:07:30.880537   37204 logs.go:276] No container was found matching "coredns"
	I1109 11:07:30.880614   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1109 11:07:30.905392   37204 logs.go:274] 0 containers: []
	W1109 11:07:30.905405   37204 logs.go:276] No container was found matching "kube-scheduler"
	I1109 11:07:30.905509   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1109 11:07:30.928465   37204 logs.go:274] 0 containers: []
	W1109 11:07:30.928478   37204 logs.go:276] No container was found matching "kube-proxy"
	I1109 11:07:30.928557   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1109 11:07:30.952878   37204 logs.go:274] 0 containers: []
	W1109 11:07:30.952893   37204 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1109 11:07:30.952977   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1109 11:07:30.976365   37204 logs.go:274] 0 containers: []
	W1109 11:07:30.976381   37204 logs.go:276] No container was found matching "storage-provisioner"
	I1109 11:07:30.976483   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1109 11:07:30.998959   37204 logs.go:274] 0 containers: []
	W1109 11:07:30.998971   37204 logs.go:276] No container was found matching "kube-controller-manager"
	I1109 11:07:30.998979   37204 logs.go:123] Gathering logs for container status ...
	I1109 11:07:30.998986   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1109 11:07:33.054222   37204 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.055242804s)
	I1109 11:07:33.054335   37204 logs.go:123] Gathering logs for kubelet ...
	I1109 11:07:33.054342   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1109 11:07:33.102406   37204 logs.go:123] Gathering logs for dmesg ...
	I1109 11:07:33.102421   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1109 11:07:33.115371   37204 logs.go:123] Gathering logs for describe nodes ...
	I1109 11:07:33.115386   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1109 11:07:33.178046   37204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1109 11:07:33.178059   37204 logs.go:123] Gathering logs for Docker ...
	I1109 11:07:33.178068   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1109 11:07:35.694981   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:07:35.810444   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1109 11:07:35.836738   37204 logs.go:274] 0 containers: []
	W1109 11:07:35.836752   37204 logs.go:276] No container was found matching "kube-apiserver"
	I1109 11:07:35.836840   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1109 11:07:35.864204   37204 logs.go:274] 0 containers: []
	W1109 11:07:35.864217   37204 logs.go:276] No container was found matching "etcd"
	I1109 11:07:35.864301   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1109 11:07:35.885846   37204 logs.go:274] 0 containers: []
	W1109 11:07:35.885859   37204 logs.go:276] No container was found matching "coredns"
	I1109 11:07:35.885948   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1109 11:07:35.909051   37204 logs.go:274] 0 containers: []
	W1109 11:07:35.909062   37204 logs.go:276] No container was found matching "kube-scheduler"
	I1109 11:07:35.909145   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1109 11:07:35.931956   37204 logs.go:274] 0 containers: []
	W1109 11:07:35.931969   37204 logs.go:276] No container was found matching "kube-proxy"
	I1109 11:07:35.932057   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1109 11:07:35.958215   37204 logs.go:274] 0 containers: []
	W1109 11:07:35.958231   37204 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1109 11:07:35.958334   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1109 11:07:35.983459   37204 logs.go:274] 0 containers: []
	W1109 11:07:35.983471   37204 logs.go:276] No container was found matching "storage-provisioner"
	I1109 11:07:35.983558   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1109 11:07:36.007341   37204 logs.go:274] 0 containers: []
	W1109 11:07:36.007352   37204 logs.go:276] No container was found matching "kube-controller-manager"
	I1109 11:07:36.007358   37204 logs.go:123] Gathering logs for dmesg ...
	I1109 11:07:36.007366   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1109 11:07:36.020219   37204 logs.go:123] Gathering logs for describe nodes ...
	I1109 11:07:36.020233   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1109 11:07:36.081582   37204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1109 11:07:36.081595   37204 logs.go:123] Gathering logs for Docker ...
	I1109 11:07:36.081603   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1109 11:07:36.096574   37204 logs.go:123] Gathering logs for container status ...
	I1109 11:07:36.096587   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1109 11:07:38.148452   37204 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.051866514s)
	I1109 11:07:38.148570   37204 logs.go:123] Gathering logs for kubelet ...
	I1109 11:07:38.148578   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1109 11:07:40.691276   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:07:40.811786   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1109 11:07:40.835175   37204 logs.go:274] 0 containers: []
	W1109 11:07:40.835200   37204 logs.go:276] No container was found matching "kube-apiserver"
	I1109 11:07:40.835283   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1109 11:07:40.857910   37204 logs.go:274] 0 containers: []
	W1109 11:07:40.857922   37204 logs.go:276] No container was found matching "etcd"
	I1109 11:07:40.858007   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1109 11:07:40.879517   37204 logs.go:274] 0 containers: []
	W1109 11:07:40.879529   37204 logs.go:276] No container was found matching "coredns"
	I1109 11:07:40.879613   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1109 11:07:40.902028   37204 logs.go:274] 0 containers: []
	W1109 11:07:40.902039   37204 logs.go:276] No container was found matching "kube-scheduler"
	I1109 11:07:40.902119   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1109 11:07:40.924801   37204 logs.go:274] 0 containers: []
	W1109 11:07:40.924813   37204 logs.go:276] No container was found matching "kube-proxy"
	I1109 11:07:40.924897   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1109 11:07:40.946213   37204 logs.go:274] 0 containers: []
	W1109 11:07:40.946224   37204 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1109 11:07:40.946307   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1109 11:07:40.969343   37204 logs.go:274] 0 containers: []
	W1109 11:07:40.969354   37204 logs.go:276] No container was found matching "storage-provisioner"
	I1109 11:07:40.969438   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1109 11:07:40.990918   37204 logs.go:274] 0 containers: []
	W1109 11:07:40.990929   37204 logs.go:276] No container was found matching "kube-controller-manager"
	I1109 11:07:40.990937   37204 logs.go:123] Gathering logs for Docker ...
	I1109 11:07:40.990943   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1109 11:07:41.004774   37204 logs.go:123] Gathering logs for container status ...
	I1109 11:07:41.004786   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1109 11:07:43.051549   37204 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.046767755s)
	I1109 11:07:43.051721   37204 logs.go:123] Gathering logs for kubelet ...
	I1109 11:07:43.051730   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1109 11:07:43.095051   37204 logs.go:123] Gathering logs for dmesg ...
	I1109 11:07:43.095074   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1109 11:07:43.109563   37204 logs.go:123] Gathering logs for describe nodes ...
	I1109 11:07:43.109578   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1109 11:07:43.170354   37204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1109 11:07:45.670560   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:07:45.810324   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1109 11:07:45.839802   37204 logs.go:274] 0 containers: []
	W1109 11:07:45.839819   37204 logs.go:276] No container was found matching "kube-apiserver"
	I1109 11:07:45.839907   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1109 11:07:45.867967   37204 logs.go:274] 0 containers: []
	W1109 11:07:45.867980   37204 logs.go:276] No container was found matching "etcd"
	I1109 11:07:45.868071   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1109 11:07:45.893249   37204 logs.go:274] 0 containers: []
	W1109 11:07:45.893262   37204 logs.go:276] No container was found matching "coredns"
	I1109 11:07:45.893348   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1109 11:07:45.920455   37204 logs.go:274] 0 containers: []
	W1109 11:07:45.920469   37204 logs.go:276] No container was found matching "kube-scheduler"
	I1109 11:07:45.920557   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1109 11:07:45.945003   37204 logs.go:274] 0 containers: []
	W1109 11:07:45.945016   37204 logs.go:276] No container was found matching "kube-proxy"
	I1109 11:07:45.945100   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1109 11:07:45.969159   37204 logs.go:274] 0 containers: []
	W1109 11:07:45.969170   37204 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1109 11:07:45.969253   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1109 11:07:45.990779   37204 logs.go:274] 0 containers: []
	W1109 11:07:45.990790   37204 logs.go:276] No container was found matching "storage-provisioner"
	I1109 11:07:45.990879   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1109 11:07:46.013571   37204 logs.go:274] 0 containers: []
	W1109 11:07:46.013584   37204 logs.go:276] No container was found matching "kube-controller-manager"
	I1109 11:07:46.013591   37204 logs.go:123] Gathering logs for dmesg ...
	I1109 11:07:46.013599   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1109 11:07:46.026425   37204 logs.go:123] Gathering logs for describe nodes ...
	I1109 11:07:46.026440   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1109 11:07:46.088267   37204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1109 11:07:46.088278   37204 logs.go:123] Gathering logs for Docker ...
	I1109 11:07:46.088300   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1109 11:07:46.102874   37204 logs.go:123] Gathering logs for container status ...
	I1109 11:07:46.102888   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1109 11:07:48.149219   37204 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.046337841s)
	I1109 11:07:48.149334   37204 logs.go:123] Gathering logs for kubelet ...
	I1109 11:07:48.149341   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1109 11:07:50.688124   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:07:50.812391   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1109 11:07:50.836349   37204 logs.go:274] 0 containers: []
	W1109 11:07:50.836361   37204 logs.go:276] No container was found matching "kube-apiserver"
	I1109 11:07:50.836442   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1109 11:07:50.857680   37204 logs.go:274] 0 containers: []
	W1109 11:07:50.857691   37204 logs.go:276] No container was found matching "etcd"
	I1109 11:07:50.857777   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1109 11:07:50.880913   37204 logs.go:274] 0 containers: []
	W1109 11:07:50.880926   37204 logs.go:276] No container was found matching "coredns"
	I1109 11:07:50.881010   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1109 11:07:50.903421   37204 logs.go:274] 0 containers: []
	W1109 11:07:50.903432   37204 logs.go:276] No container was found matching "kube-scheduler"
	I1109 11:07:50.903512   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1109 11:07:50.925107   37204 logs.go:274] 0 containers: []
	W1109 11:07:50.925118   37204 logs.go:276] No container was found matching "kube-proxy"
	I1109 11:07:50.925205   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1109 11:07:50.948129   37204 logs.go:274] 0 containers: []
	W1109 11:07:50.948141   37204 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1109 11:07:50.948233   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1109 11:07:50.971555   37204 logs.go:274] 0 containers: []
	W1109 11:07:50.971567   37204 logs.go:276] No container was found matching "storage-provisioner"
	I1109 11:07:50.971649   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1109 11:07:50.993547   37204 logs.go:274] 0 containers: []
	W1109 11:07:50.993559   37204 logs.go:276] No container was found matching "kube-controller-manager"
	I1109 11:07:50.993566   37204 logs.go:123] Gathering logs for kubelet ...
	I1109 11:07:50.993574   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1109 11:07:51.032512   37204 logs.go:123] Gathering logs for dmesg ...
	I1109 11:07:51.032526   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1109 11:07:51.044340   37204 logs.go:123] Gathering logs for describe nodes ...
	I1109 11:07:51.044351   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1109 11:07:51.099087   37204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1109 11:07:51.099098   37204 logs.go:123] Gathering logs for Docker ...
	I1109 11:07:51.099107   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1109 11:07:51.113752   37204 logs.go:123] Gathering logs for container status ...
	I1109 11:07:51.113765   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1109 11:07:53.162327   37204 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.048568704s)
	I1109 11:07:55.663776   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:07:55.810294   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1109 11:07:55.833583   37204 logs.go:274] 0 containers: []
	W1109 11:07:55.833595   37204 logs.go:276] No container was found matching "kube-apiserver"
	I1109 11:07:55.833678   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1109 11:07:55.856160   37204 logs.go:274] 0 containers: []
	W1109 11:07:55.856172   37204 logs.go:276] No container was found matching "etcd"
	I1109 11:07:55.856268   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1109 11:07:55.878116   37204 logs.go:274] 0 containers: []
	W1109 11:07:55.878129   37204 logs.go:276] No container was found matching "coredns"
	I1109 11:07:55.878223   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1109 11:07:55.900557   37204 logs.go:274] 0 containers: []
	W1109 11:07:55.900568   37204 logs.go:276] No container was found matching "kube-scheduler"
	I1109 11:07:55.900648   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1109 11:07:55.922937   37204 logs.go:274] 0 containers: []
	W1109 11:07:55.922949   37204 logs.go:276] No container was found matching "kube-proxy"
	I1109 11:07:55.923033   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1109 11:07:55.944092   37204 logs.go:274] 0 containers: []
	W1109 11:07:55.944105   37204 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1109 11:07:55.944197   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1109 11:07:55.966647   37204 logs.go:274] 0 containers: []
	W1109 11:07:55.966658   37204 logs.go:276] No container was found matching "storage-provisioner"
	I1109 11:07:55.966740   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1109 11:07:55.989291   37204 logs.go:274] 0 containers: []
	W1109 11:07:55.989303   37204 logs.go:276] No container was found matching "kube-controller-manager"
	I1109 11:07:55.989310   37204 logs.go:123] Gathering logs for kubelet ...
	I1109 11:07:55.989317   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1109 11:07:56.026778   37204 logs.go:123] Gathering logs for dmesg ...
	I1109 11:07:56.026790   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1109 11:07:56.038206   37204 logs.go:123] Gathering logs for describe nodes ...
	I1109 11:07:56.038221   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1109 11:07:56.091590   37204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1109 11:07:56.091600   37204 logs.go:123] Gathering logs for Docker ...
	I1109 11:07:56.091607   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1109 11:07:56.105470   37204 logs.go:123] Gathering logs for container status ...
	I1109 11:07:56.105482   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1109 11:07:58.152238   37204 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.046762403s)
	I1109 11:08:00.652519   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:08:00.810428   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1109 11:08:00.837112   37204 logs.go:274] 0 containers: []
	W1109 11:08:00.837124   37204 logs.go:276] No container was found matching "kube-apiserver"
	I1109 11:08:00.837219   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1109 11:08:00.869965   37204 logs.go:274] 0 containers: []
	W1109 11:08:00.869978   37204 logs.go:276] No container was found matching "etcd"
	I1109 11:08:00.870072   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1109 11:08:00.894101   37204 logs.go:274] 0 containers: []
	W1109 11:08:00.894114   37204 logs.go:276] No container was found matching "coredns"
	I1109 11:08:00.894206   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1109 11:08:00.941979   37204 logs.go:274] 0 containers: []
	W1109 11:08:00.942004   37204 logs.go:276] No container was found matching "kube-scheduler"
	I1109 11:08:00.942159   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1109 11:08:00.980624   37204 logs.go:274] 0 containers: []
	W1109 11:08:00.980637   37204 logs.go:276] No container was found matching "kube-proxy"
	I1109 11:08:00.980723   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1109 11:08:01.010290   37204 logs.go:274] 0 containers: []
	W1109 11:08:01.010305   37204 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1109 11:08:01.010402   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1109 11:08:01.042482   37204 logs.go:274] 0 containers: []
	W1109 11:08:01.042493   37204 logs.go:276] No container was found matching "storage-provisioner"
	I1109 11:08:01.042587   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1109 11:08:01.072712   37204 logs.go:274] 0 containers: []
	W1109 11:08:01.072725   37204 logs.go:276] No container was found matching "kube-controller-manager"
	I1109 11:08:01.072733   37204 logs.go:123] Gathering logs for kubelet ...
	I1109 11:08:01.072740   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1109 11:08:01.129280   37204 logs.go:123] Gathering logs for dmesg ...
	I1109 11:08:01.129303   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1109 11:08:01.145546   37204 logs.go:123] Gathering logs for describe nodes ...
	I1109 11:08:01.145566   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1109 11:08:01.206191   37204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 11:08:01.206216   37204 logs.go:123] Gathering logs for Docker ...
	I1109 11:08:01.206225   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1109 11:08:01.224209   37204 logs.go:123] Gathering logs for container status ...
	I1109 11:08:01.224227   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1109 11:08:03.278019   37204 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.053798349s)
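
The per-component docker queries rely on kubelet's container naming: with the Docker runtime, containers are named k8s_<container>_<pod>_<namespace>_<uid>_<attempt>, so a name=k8s_kube-apiserver filter matches any apiserver container, running or exited. Zero matches for all eight components means kubelet never created (or recreated) the control plane. The eight probes condense to one loop, sketched here as a hypothetical convenience:

    # condensed form of the eight probes logged above
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kubernetes-dashboard storage-provisioner kube-controller-manager; do
        ids=$(docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}')
        echo "${c}: ${ids:-none}"
    done
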
	I1109 11:08:05.778579   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:08:05.810376   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1109 11:08:05.836681   37204 logs.go:274] 0 containers: []
	W1109 11:08:05.836692   37204 logs.go:276] No container was found matching "kube-apiserver"
	I1109 11:08:05.836775   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1109 11:08:05.860117   37204 logs.go:274] 0 containers: []
	W1109 11:08:05.860129   37204 logs.go:276] No container was found matching "etcd"
	I1109 11:08:05.860214   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1109 11:08:05.882407   37204 logs.go:274] 0 containers: []
	W1109 11:08:05.882419   37204 logs.go:276] No container was found matching "coredns"
	I1109 11:08:05.882501   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1109 11:08:05.905749   37204 logs.go:274] 0 containers: []
	W1109 11:08:05.905761   37204 logs.go:276] No container was found matching "kube-scheduler"
	I1109 11:08:05.905843   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1109 11:08:05.928244   37204 logs.go:274] 0 containers: []
	W1109 11:08:05.928256   37204 logs.go:276] No container was found matching "kube-proxy"
	I1109 11:08:05.928340   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1109 11:08:05.951595   37204 logs.go:274] 0 containers: []
	W1109 11:08:05.951608   37204 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1109 11:08:05.951692   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1109 11:08:05.975488   37204 logs.go:274] 0 containers: []
	W1109 11:08:05.975499   37204 logs.go:276] No container was found matching "storage-provisioner"
	I1109 11:08:05.975579   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1109 11:08:05.997419   37204 logs.go:274] 0 containers: []
	W1109 11:08:05.997430   37204 logs.go:276] No container was found matching "kube-controller-manager"
	I1109 11:08:05.997437   37204 logs.go:123] Gathering logs for container status ...
	I1109 11:08:05.997444   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1109 11:08:08.046168   37204 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.048731322s)
	I1109 11:08:08.046297   37204 logs.go:123] Gathering logs for kubelet ...
	I1109 11:08:08.046307   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1109 11:08:08.087095   37204 logs.go:123] Gathering logs for dmesg ...
	I1109 11:08:08.087110   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1109 11:08:08.098892   37204 logs.go:123] Gathering logs for describe nodes ...
	I1109 11:08:08.098904   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1109 11:08:08.153541   37204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 11:08:08.153552   37204 logs.go:123] Gathering logs for Docker ...
	I1109 11:08:08.153559   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1109 11:08:10.670155   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:08:10.810603   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1109 11:08:10.833665   37204 logs.go:274] 0 containers: []
	W1109 11:08:10.833678   37204 logs.go:276] No container was found matching "kube-apiserver"
	I1109 11:08:10.833762   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1109 11:08:10.856938   37204 logs.go:274] 0 containers: []
	W1109 11:08:10.856951   37204 logs.go:276] No container was found matching "etcd"
	I1109 11:08:10.857033   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1109 11:08:10.878407   37204 logs.go:274] 0 containers: []
	W1109 11:08:10.878419   37204 logs.go:276] No container was found matching "coredns"
	I1109 11:08:10.878501   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1109 11:08:10.900473   37204 logs.go:274] 0 containers: []
	W1109 11:08:10.900485   37204 logs.go:276] No container was found matching "kube-scheduler"
	I1109 11:08:10.900571   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1109 11:08:10.923308   37204 logs.go:274] 0 containers: []
	W1109 11:08:10.923319   37204 logs.go:276] No container was found matching "kube-proxy"
	I1109 11:08:10.923402   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1109 11:08:10.945747   37204 logs.go:274] 0 containers: []
	W1109 11:08:10.945762   37204 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1109 11:08:10.945845   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1109 11:08:10.967643   37204 logs.go:274] 0 containers: []
	W1109 11:08:10.967655   37204 logs.go:276] No container was found matching "storage-provisioner"
	I1109 11:08:10.967738   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1109 11:08:10.991694   37204 logs.go:274] 0 containers: []
	W1109 11:08:10.991706   37204 logs.go:276] No container was found matching "kube-controller-manager"
	I1109 11:08:10.991714   37204 logs.go:123] Gathering logs for Docker ...
	I1109 11:08:10.991721   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1109 11:08:11.005824   37204 logs.go:123] Gathering logs for container status ...
	I1109 11:08:11.005837   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1109 11:08:13.054143   37204 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.04831336s)
	I1109 11:08:13.054262   37204 logs.go:123] Gathering logs for kubelet ...
	I1109 11:08:13.054269   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1109 11:08:13.093922   37204 logs.go:123] Gathering logs for dmesg ...
	I1109 11:08:13.093937   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1109 11:08:13.105341   37204 logs.go:123] Gathering logs for describe nodes ...
	I1109 11:08:13.105353   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1109 11:08:13.159475   37204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
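
The timestamps show the outer wait loop: roughly every five seconds minikube re-runs `sudo pgrep -xnf kube-apiserver.*minikube.*` (-x exact match, -f match against the full command line, -n newest matching process) and, while that finds nothing, re-collects the same log sources. A shell sketch of that cadence; the real loop lives in minikube's Go code, and the five-second interval is inferred from these timestamps, not from the source:

    # wait until an apiserver process whose full command line matches appears
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
        sleep 5    # assumed interval, read off the log timestamps
    done
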
	I1109 11:08:15.660099   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:08:15.810024   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1109 11:08:15.834688   37204 logs.go:274] 0 containers: []
	W1109 11:08:15.834705   37204 logs.go:276] No container was found matching "kube-apiserver"
	I1109 11:08:15.834795   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1109 11:08:15.858026   37204 logs.go:274] 0 containers: []
	W1109 11:08:15.858039   37204 logs.go:276] No container was found matching "etcd"
	I1109 11:08:15.858123   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1109 11:08:15.881607   37204 logs.go:274] 0 containers: []
	W1109 11:08:15.881620   37204 logs.go:276] No container was found matching "coredns"
	I1109 11:08:15.881702   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1109 11:08:15.904207   37204 logs.go:274] 0 containers: []
	W1109 11:08:15.904219   37204 logs.go:276] No container was found matching "kube-scheduler"
	I1109 11:08:15.904306   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1109 11:08:15.927495   37204 logs.go:274] 0 containers: []
	W1109 11:08:15.927507   37204 logs.go:276] No container was found matching "kube-proxy"
	I1109 11:08:15.927596   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1109 11:08:15.951070   37204 logs.go:274] 0 containers: []
	W1109 11:08:15.951083   37204 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1109 11:08:15.951169   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1109 11:08:15.973069   37204 logs.go:274] 0 containers: []
	W1109 11:08:15.973081   37204 logs.go:276] No container was found matching "storage-provisioner"
	I1109 11:08:15.973161   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1109 11:08:15.995720   37204 logs.go:274] 0 containers: []
	W1109 11:08:15.995733   37204 logs.go:276] No container was found matching "kube-controller-manager"
	I1109 11:08:15.995740   37204 logs.go:123] Gathering logs for Docker ...
	I1109 11:08:15.995747   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1109 11:08:16.009443   37204 logs.go:123] Gathering logs for container status ...
	I1109 11:08:16.009456   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1109 11:08:18.054966   37204 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.045515373s)
	I1109 11:08:18.055097   37204 logs.go:123] Gathering logs for kubelet ...
	I1109 11:08:18.055105   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1109 11:08:18.096188   37204 logs.go:123] Gathering logs for dmesg ...
	I1109 11:08:18.096206   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1109 11:08:18.109383   37204 logs.go:123] Gathering logs for describe nodes ...
	I1109 11:08:18.109396   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1109 11:08:18.163264   37204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 11:08:20.665564   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:08:20.812138   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1109 11:08:20.835459   37204 logs.go:274] 0 containers: []
	W1109 11:08:20.835470   37204 logs.go:276] No container was found matching "kube-apiserver"
	I1109 11:08:20.835554   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1109 11:08:20.857266   37204 logs.go:274] 0 containers: []
	W1109 11:08:20.857280   37204 logs.go:276] No container was found matching "etcd"
	I1109 11:08:20.857364   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1109 11:08:20.880976   37204 logs.go:274] 0 containers: []
	W1109 11:08:20.880990   37204 logs.go:276] No container was found matching "coredns"
	I1109 11:08:20.881079   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1109 11:08:20.903346   37204 logs.go:274] 0 containers: []
	W1109 11:08:20.903357   37204 logs.go:276] No container was found matching "kube-scheduler"
	I1109 11:08:20.903441   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1109 11:08:20.925735   37204 logs.go:274] 0 containers: []
	W1109 11:08:20.925747   37204 logs.go:276] No container was found matching "kube-proxy"
	I1109 11:08:20.925832   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1109 11:08:20.948282   37204 logs.go:274] 0 containers: []
	W1109 11:08:20.948295   37204 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1109 11:08:20.948378   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1109 11:08:20.969642   37204 logs.go:274] 0 containers: []
	W1109 11:08:20.969653   37204 logs.go:276] No container was found matching "storage-provisioner"
	I1109 11:08:20.969735   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1109 11:08:20.992213   37204 logs.go:274] 0 containers: []
	W1109 11:08:20.992225   37204 logs.go:276] No container was found matching "kube-controller-manager"
	I1109 11:08:20.992232   37204 logs.go:123] Gathering logs for kubelet ...
	I1109 11:08:20.992238   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1109 11:08:21.029850   37204 logs.go:123] Gathering logs for dmesg ...
	I1109 11:08:21.029867   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1109 11:08:21.042700   37204 logs.go:123] Gathering logs for describe nodes ...
	I1109 11:08:21.042714   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1109 11:08:21.098971   37204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 11:08:21.098983   37204 logs.go:123] Gathering logs for Docker ...
	I1109 11:08:21.098990   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1109 11:08:21.115334   37204 logs.go:123] Gathering logs for container status ...
	I1109 11:08:21.115350   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1109 11:08:23.164418   37204 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.049071557s)
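
Besides the container probes, each cycle collects three host-side log sources; the commands are taken verbatim from the runs above and can be executed directly on the node:

    sudo journalctl -u kubelet -n 400    # last 400 journal entries for the kubelet unit
    sudo journalctl -u docker -n 400     # last 400 entries for dockerd
    # kernel messages: human-readable (-H), no pager (-P), no color (-L=never),
    # warning level and above, trimmed to the last 400 lines
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
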
	I1109 11:08:25.664881   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:08:25.812115   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1109 11:08:25.836877   37204 logs.go:274] 0 containers: []
	W1109 11:08:25.836889   37204 logs.go:276] No container was found matching "kube-apiserver"
	I1109 11:08:25.836971   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1109 11:08:25.859611   37204 logs.go:274] 0 containers: []
	W1109 11:08:25.859624   37204 logs.go:276] No container was found matching "etcd"
	I1109 11:08:25.859708   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1109 11:08:25.881902   37204 logs.go:274] 0 containers: []
	W1109 11:08:25.881913   37204 logs.go:276] No container was found matching "coredns"
	I1109 11:08:25.881998   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1109 11:08:25.904409   37204 logs.go:274] 0 containers: []
	W1109 11:08:25.904420   37204 logs.go:276] No container was found matching "kube-scheduler"
	I1109 11:08:25.904503   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1109 11:08:25.927790   37204 logs.go:274] 0 containers: []
	W1109 11:08:25.927802   37204 logs.go:276] No container was found matching "kube-proxy"
	I1109 11:08:25.927886   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1109 11:08:25.952752   37204 logs.go:274] 0 containers: []
	W1109 11:08:25.952763   37204 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1109 11:08:25.952845   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1109 11:08:25.974785   37204 logs.go:274] 0 containers: []
	W1109 11:08:25.974796   37204 logs.go:276] No container was found matching "storage-provisioner"
	I1109 11:08:25.974879   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1109 11:08:25.996880   37204 logs.go:274] 0 containers: []
	W1109 11:08:25.996892   37204 logs.go:276] No container was found matching "kube-controller-manager"
	I1109 11:08:25.996899   37204 logs.go:123] Gathering logs for describe nodes ...
	I1109 11:08:25.996907   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1109 11:08:26.050748   37204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 11:08:26.050758   37204 logs.go:123] Gathering logs for Docker ...
	I1109 11:08:26.050764   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1109 11:08:26.064391   37204 logs.go:123] Gathering logs for container status ...
	I1109 11:08:26.064404   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1109 11:08:28.109657   37204 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.045258963s)
	I1109 11:08:28.109773   37204 logs.go:123] Gathering logs for kubelet ...
	I1109 11:08:28.109781   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1109 11:08:28.147077   37204 logs.go:123] Gathering logs for dmesg ...
	I1109 11:08:28.147091   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1109 11:08:30.659791   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:08:30.811006   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1109 11:08:30.836824   37204 logs.go:274] 0 containers: []
	W1109 11:08:30.836839   37204 logs.go:276] No container was found matching "kube-apiserver"
	I1109 11:08:30.836923   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1109 11:08:30.863924   37204 logs.go:274] 0 containers: []
	W1109 11:08:30.863937   37204 logs.go:276] No container was found matching "etcd"
	I1109 11:08:30.864021   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1109 11:08:30.886680   37204 logs.go:274] 0 containers: []
	W1109 11:08:30.886692   37204 logs.go:276] No container was found matching "coredns"
	I1109 11:08:30.886780   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1109 11:08:30.912364   37204 logs.go:274] 0 containers: []
	W1109 11:08:30.912375   37204 logs.go:276] No container was found matching "kube-scheduler"
	I1109 11:08:30.912461   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1109 11:08:30.934736   37204 logs.go:274] 0 containers: []
	W1109 11:08:30.934748   37204 logs.go:276] No container was found matching "kube-proxy"
	I1109 11:08:30.934830   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1109 11:08:30.956509   37204 logs.go:274] 0 containers: []
	W1109 11:08:30.956522   37204 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1109 11:08:30.956606   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1109 11:08:30.979727   37204 logs.go:274] 0 containers: []
	W1109 11:08:30.979739   37204 logs.go:276] No container was found matching "storage-provisioner"
	I1109 11:08:30.979825   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1109 11:08:31.002999   37204 logs.go:274] 0 containers: []
	W1109 11:08:31.003012   37204 logs.go:276] No container was found matching "kube-controller-manager"
	I1109 11:08:31.003020   37204 logs.go:123] Gathering logs for Docker ...
	I1109 11:08:31.003028   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1109 11:08:31.019648   37204 logs.go:123] Gathering logs for container status ...
	I1109 11:08:31.019662   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1109 11:08:33.063933   37204 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.044275691s)
	I1109 11:08:33.064078   37204 logs.go:123] Gathering logs for kubelet ...
	I1109 11:08:33.064088   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1109 11:08:33.107859   37204 logs.go:123] Gathering logs for dmesg ...
	I1109 11:08:33.107882   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1109 11:08:33.119844   37204 logs.go:123] Gathering logs for describe nodes ...
	I1109 11:08:33.119857   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1109 11:08:33.179191   37204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 11:08:35.679561   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:08:35.810994   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1109 11:08:35.834830   37204 logs.go:274] 0 containers: []
	W1109 11:08:35.834842   37204 logs.go:276] No container was found matching "kube-apiserver"
	I1109 11:08:35.834925   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1109 11:08:35.856639   37204 logs.go:274] 0 containers: []
	W1109 11:08:35.856651   37204 logs.go:276] No container was found matching "etcd"
	I1109 11:08:35.856733   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1109 11:08:35.878605   37204 logs.go:274] 0 containers: []
	W1109 11:08:35.878616   37204 logs.go:276] No container was found matching "coredns"
	I1109 11:08:35.878701   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1109 11:08:35.900560   37204 logs.go:274] 0 containers: []
	W1109 11:08:35.900571   37204 logs.go:276] No container was found matching "kube-scheduler"
	I1109 11:08:35.900659   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1109 11:08:35.923325   37204 logs.go:274] 0 containers: []
	W1109 11:08:35.923336   37204 logs.go:276] No container was found matching "kube-proxy"
	I1109 11:08:35.923416   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1109 11:08:35.944877   37204 logs.go:274] 0 containers: []
	W1109 11:08:35.944889   37204 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1109 11:08:35.944987   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1109 11:08:35.967100   37204 logs.go:274] 0 containers: []
	W1109 11:08:35.967112   37204 logs.go:276] No container was found matching "storage-provisioner"
	I1109 11:08:35.967194   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1109 11:08:35.988565   37204 logs.go:274] 0 containers: []
	W1109 11:08:35.988577   37204 logs.go:276] No container was found matching "kube-controller-manager"
	I1109 11:08:35.988584   37204 logs.go:123] Gathering logs for describe nodes ...
	I1109 11:08:35.988591   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1109 11:08:36.044183   37204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 11:08:36.044194   37204 logs.go:123] Gathering logs for Docker ...
	I1109 11:08:36.044201   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1109 11:08:36.058325   37204 logs.go:123] Gathering logs for container status ...
	I1109 11:08:36.058338   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1109 11:08:38.107266   37204 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.048933104s)
	I1109 11:08:38.107388   37204 logs.go:123] Gathering logs for kubelet ...
	I1109 11:08:38.107397   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1109 11:08:38.144947   37204 logs.go:123] Gathering logs for dmesg ...
	I1109 11:08:38.144965   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1109 11:08:40.659655   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:08:40.809804   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1109 11:08:40.834842   37204 logs.go:274] 0 containers: []
	W1109 11:08:40.834854   37204 logs.go:276] No container was found matching "kube-apiserver"
	I1109 11:08:40.834946   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1109 11:08:40.858968   37204 logs.go:274] 0 containers: []
	W1109 11:08:40.858979   37204 logs.go:276] No container was found matching "etcd"
	I1109 11:08:40.859061   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1109 11:08:40.889473   37204 logs.go:274] 0 containers: []
	W1109 11:08:40.889490   37204 logs.go:276] No container was found matching "coredns"
	I1109 11:08:40.889587   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1109 11:08:40.918532   37204 logs.go:274] 0 containers: []
	W1109 11:08:40.918547   37204 logs.go:276] No container was found matching "kube-scheduler"
	I1109 11:08:40.918652   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1109 11:08:40.946726   37204 logs.go:274] 0 containers: []
	W1109 11:08:40.946738   37204 logs.go:276] No container was found matching "kube-proxy"
	I1109 11:08:40.946825   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1109 11:08:40.973633   37204 logs.go:274] 0 containers: []
	W1109 11:08:40.973645   37204 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1109 11:08:40.973734   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1109 11:08:41.023803   37204 logs.go:274] 0 containers: []
	W1109 11:08:41.023817   37204 logs.go:276] No container was found matching "storage-provisioner"
	I1109 11:08:41.023915   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1109 11:08:41.049357   37204 logs.go:274] 0 containers: []
	W1109 11:08:41.049370   37204 logs.go:276] No container was found matching "kube-controller-manager"
	I1109 11:08:41.049378   37204 logs.go:123] Gathering logs for kubelet ...
	I1109 11:08:41.049386   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1109 11:08:41.090416   37204 logs.go:123] Gathering logs for dmesg ...
	I1109 11:08:41.090436   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1109 11:08:41.103412   37204 logs.go:123] Gathering logs for describe nodes ...
	I1109 11:08:41.103430   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1109 11:08:41.165318   37204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 11:08:41.165330   37204 logs.go:123] Gathering logs for Docker ...
	I1109 11:08:41.165337   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1109 11:08:41.180543   37204 logs.go:123] Gathering logs for container status ...
	I1109 11:08:41.180557   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1109 11:08:43.228390   37204 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.04783804s)
	I1109 11:08:45.730225   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:08:45.809717   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1109 11:08:45.832983   37204 logs.go:274] 0 containers: []
	W1109 11:08:45.832994   37204 logs.go:276] No container was found matching "kube-apiserver"
	I1109 11:08:45.833073   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1109 11:08:45.860427   37204 logs.go:274] 0 containers: []
	W1109 11:08:45.860454   37204 logs.go:276] No container was found matching "etcd"
	I1109 11:08:45.860557   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1109 11:08:45.891250   37204 logs.go:274] 0 containers: []
	W1109 11:08:45.891265   37204 logs.go:276] No container was found matching "coredns"
	I1109 11:08:45.891355   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1109 11:08:45.919194   37204 logs.go:274] 0 containers: []
	W1109 11:08:45.919231   37204 logs.go:276] No container was found matching "kube-scheduler"
	I1109 11:08:45.919315   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1109 11:08:45.943106   37204 logs.go:274] 0 containers: []
	W1109 11:08:45.943119   37204 logs.go:276] No container was found matching "kube-proxy"
	I1109 11:08:45.943207   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1109 11:08:45.965965   37204 logs.go:274] 0 containers: []
	W1109 11:08:45.965978   37204 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1109 11:08:45.966065   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1109 11:08:45.988011   37204 logs.go:274] 0 containers: []
	W1109 11:08:45.988023   37204 logs.go:276] No container was found matching "storage-provisioner"
	I1109 11:08:45.988108   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1109 11:08:46.013783   37204 logs.go:274] 0 containers: []
	W1109 11:08:46.013796   37204 logs.go:276] No container was found matching "kube-controller-manager"
	I1109 11:08:46.013804   37204 logs.go:123] Gathering logs for describe nodes ...
	I1109 11:08:46.013812   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1109 11:08:46.096332   37204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 11:08:46.096343   37204 logs.go:123] Gathering logs for Docker ...
	I1109 11:08:46.096349   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1109 11:08:46.110907   37204 logs.go:123] Gathering logs for container status ...
	I1109 11:08:46.110921   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1109 11:08:48.159028   37204 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.048112893s)
	I1109 11:08:48.159142   37204 logs.go:123] Gathering logs for kubelet ...
	I1109 11:08:48.159150   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1109 11:08:48.196780   37204 logs.go:123] Gathering logs for dmesg ...
	I1109 11:08:48.196793   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1109 11:08:50.709558   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:08:50.810004   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1109 11:08:50.835402   37204 logs.go:274] 0 containers: []
	W1109 11:08:50.835415   37204 logs.go:276] No container was found matching "kube-apiserver"
	I1109 11:08:50.835499   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1109 11:08:50.857159   37204 logs.go:274] 0 containers: []
	W1109 11:08:50.857171   37204 logs.go:276] No container was found matching "etcd"
	I1109 11:08:50.857257   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1109 11:08:50.879398   37204 logs.go:274] 0 containers: []
	W1109 11:08:50.879410   37204 logs.go:276] No container was found matching "coredns"
	I1109 11:08:50.879501   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1109 11:08:50.903323   37204 logs.go:274] 0 containers: []
	W1109 11:08:50.903336   37204 logs.go:276] No container was found matching "kube-scheduler"
	I1109 11:08:50.903419   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1109 11:08:50.925624   37204 logs.go:274] 0 containers: []
	W1109 11:08:50.925635   37204 logs.go:276] No container was found matching "kube-proxy"
	I1109 11:08:50.925732   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1109 11:08:50.948314   37204 logs.go:274] 0 containers: []
	W1109 11:08:50.948326   37204 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1109 11:08:50.948411   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1109 11:08:50.971245   37204 logs.go:274] 0 containers: []
	W1109 11:08:50.971257   37204 logs.go:276] No container was found matching "storage-provisioner"
	I1109 11:08:50.971349   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1109 11:08:50.993673   37204 logs.go:274] 0 containers: []
	W1109 11:08:50.993686   37204 logs.go:276] No container was found matching "kube-controller-manager"
	I1109 11:08:50.993692   37204 logs.go:123] Gathering logs for kubelet ...
	I1109 11:08:50.993700   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1109 11:08:51.033371   37204 logs.go:123] Gathering logs for dmesg ...
	I1109 11:08:51.033389   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1109 11:08:51.046453   37204 logs.go:123] Gathering logs for describe nodes ...
	I1109 11:08:51.046467   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1109 11:08:51.103328   37204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 11:08:51.103341   37204 logs.go:123] Gathering logs for Docker ...
	I1109 11:08:51.103348   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1109 11:08:51.117469   37204 logs.go:123] Gathering logs for container status ...
	I1109 11:08:51.117482   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1109 11:08:53.164697   37204 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.047220181s)
	I1109 11:08:55.667127   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:08:55.811856   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1109 11:08:55.836785   37204 logs.go:274] 0 containers: []
	W1109 11:08:55.836797   37204 logs.go:276] No container was found matching "kube-apiserver"
	I1109 11:08:55.836879   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1109 11:08:55.860979   37204 logs.go:274] 0 containers: []
	W1109 11:08:55.860992   37204 logs.go:276] No container was found matching "etcd"
	I1109 11:08:55.861075   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1109 11:08:55.882811   37204 logs.go:274] 0 containers: []
	W1109 11:08:55.882823   37204 logs.go:276] No container was found matching "coredns"
	I1109 11:08:55.882912   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1109 11:08:55.906071   37204 logs.go:274] 0 containers: []
	W1109 11:08:55.906084   37204 logs.go:276] No container was found matching "kube-scheduler"
	I1109 11:08:55.906165   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1109 11:08:55.935440   37204 logs.go:274] 0 containers: []
	W1109 11:08:55.935452   37204 logs.go:276] No container was found matching "kube-proxy"
	I1109 11:08:55.935536   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1109 11:08:55.957617   37204 logs.go:274] 0 containers: []
	W1109 11:08:55.957628   37204 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1109 11:08:55.957710   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1109 11:08:55.980439   37204 logs.go:274] 0 containers: []
	W1109 11:08:55.980450   37204 logs.go:276] No container was found matching "storage-provisioner"
	I1109 11:08:55.980530   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1109 11:08:56.002341   37204 logs.go:274] 0 containers: []
	W1109 11:08:56.002352   37204 logs.go:276] No container was found matching "kube-controller-manager"
	I1109 11:08:56.002359   37204 logs.go:123] Gathering logs for dmesg ...
	I1109 11:08:56.002366   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1109 11:08:56.013713   37204 logs.go:123] Gathering logs for describe nodes ...
	I1109 11:08:56.013725   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1109 11:08:56.067619   37204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 11:08:56.067630   37204 logs.go:123] Gathering logs for Docker ...
	I1109 11:08:56.067637   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1109 11:08:56.081247   37204 logs.go:123] Gathering logs for container status ...
	I1109 11:08:56.081258   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1109 11:08:58.129302   37204 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.048049089s)
	I1109 11:08:58.129422   37204 logs.go:123] Gathering logs for kubelet ...
	I1109 11:08:58.129430   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1109 11:09:00.672042   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:09:00.809718   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1109 11:09:00.832880   37204 logs.go:274] 0 containers: []
	W1109 11:09:00.832892   37204 logs.go:276] No container was found matching "kube-apiserver"
	I1109 11:09:00.832977   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1109 11:09:00.859256   37204 logs.go:274] 0 containers: []
	W1109 11:09:00.859266   37204 logs.go:276] No container was found matching "etcd"
	I1109 11:09:00.859349   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1109 11:09:00.882345   37204 logs.go:274] 0 containers: []
	W1109 11:09:00.882357   37204 logs.go:276] No container was found matching "coredns"
	I1109 11:09:00.882439   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1109 11:09:00.904099   37204 logs.go:274] 0 containers: []
	W1109 11:09:00.904111   37204 logs.go:276] No container was found matching "kube-scheduler"
	I1109 11:09:00.904196   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1109 11:09:00.926247   37204 logs.go:274] 0 containers: []
	W1109 11:09:00.926259   37204 logs.go:276] No container was found matching "kube-proxy"
	I1109 11:09:00.926345   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1109 11:09:00.948589   37204 logs.go:274] 0 containers: []
	W1109 11:09:00.948602   37204 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1109 11:09:00.948691   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1109 11:09:00.970027   37204 logs.go:274] 0 containers: []
	W1109 11:09:00.970039   37204 logs.go:276] No container was found matching "storage-provisioner"
	I1109 11:09:00.970119   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1109 11:09:00.992107   37204 logs.go:274] 0 containers: []
	W1109 11:09:00.992118   37204 logs.go:276] No container was found matching "kube-controller-manager"
	I1109 11:09:00.992125   37204 logs.go:123] Gathering logs for kubelet ...
	I1109 11:09:00.992133   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1109 11:09:01.032422   37204 logs.go:123] Gathering logs for dmesg ...
	I1109 11:09:01.032436   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1109 11:09:01.044088   37204 logs.go:123] Gathering logs for describe nodes ...
	I1109 11:09:01.044101   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1109 11:09:01.097563   37204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 11:09:01.097578   37204 logs.go:123] Gathering logs for Docker ...
	I1109 11:09:01.097585   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1109 11:09:01.111712   37204 logs.go:123] Gathering logs for container status ...
	I1109 11:09:01.111725   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1109 11:09:03.158363   37204 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.046642814s)
	I1109 11:09:05.658681   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:09:05.811364   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1109 11:09:05.838178   37204 logs.go:274] 0 containers: []
	W1109 11:09:05.838190   37204 logs.go:276] No container was found matching "kube-apiserver"
	I1109 11:09:05.838278   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1109 11:09:05.861106   37204 logs.go:274] 0 containers: []
	W1109 11:09:05.861118   37204 logs.go:276] No container was found matching "etcd"
	I1109 11:09:05.861198   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1109 11:09:05.884033   37204 logs.go:274] 0 containers: []
	W1109 11:09:05.884050   37204 logs.go:276] No container was found matching "coredns"
	I1109 11:09:05.884154   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1109 11:09:05.906269   37204 logs.go:274] 0 containers: []
	W1109 11:09:05.906280   37204 logs.go:276] No container was found matching "kube-scheduler"
	I1109 11:09:05.906364   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1109 11:09:05.929195   37204 logs.go:274] 0 containers: []
	W1109 11:09:05.929207   37204 logs.go:276] No container was found matching "kube-proxy"
	I1109 11:09:05.929287   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1109 11:09:05.954903   37204 logs.go:274] 0 containers: []
	W1109 11:09:05.954915   37204 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1109 11:09:05.955021   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1109 11:09:05.977342   37204 logs.go:274] 0 containers: []
	W1109 11:09:05.977353   37204 logs.go:276] No container was found matching "storage-provisioner"
	I1109 11:09:05.977436   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1109 11:09:05.999141   37204 logs.go:274] 0 containers: []
	W1109 11:09:05.999153   37204 logs.go:276] No container was found matching "kube-controller-manager"
	I1109 11:09:05.999159   37204 logs.go:123] Gathering logs for kubelet ...
	I1109 11:09:05.999168   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1109 11:09:06.037901   37204 logs.go:123] Gathering logs for dmesg ...
	I1109 11:09:06.037917   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1109 11:09:06.050924   37204 logs.go:123] Gathering logs for describe nodes ...
	I1109 11:09:06.050945   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1109 11:09:06.106571   37204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1109 11:09:06.106581   37204 logs.go:123] Gathering logs for Docker ...
	I1109 11:09:06.106589   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1109 11:09:06.120711   37204 logs.go:123] Gathering logs for container status ...
	I1109 11:09:06.120723   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1109 11:09:08.167465   37204 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.046746455s)
	I1109 11:09:10.667941   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:09:10.811214   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1109 11:09:10.836073   37204 logs.go:274] 0 containers: []
	W1109 11:09:10.836085   37204 logs.go:276] No container was found matching "kube-apiserver"
	I1109 11:09:10.836169   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1109 11:09:10.860642   37204 logs.go:274] 0 containers: []
	W1109 11:09:10.860655   37204 logs.go:276] No container was found matching "etcd"
	I1109 11:09:10.860760   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1109 11:09:10.883515   37204 logs.go:274] 0 containers: []
	W1109 11:09:10.883528   37204 logs.go:276] No container was found matching "coredns"
	I1109 11:09:10.883611   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1109 11:09:10.905889   37204 logs.go:274] 0 containers: []
	W1109 11:09:10.905901   37204 logs.go:276] No container was found matching "kube-scheduler"
	I1109 11:09:10.905984   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1109 11:09:10.929066   37204 logs.go:274] 0 containers: []
	W1109 11:09:10.929078   37204 logs.go:276] No container was found matching "kube-proxy"
	I1109 11:09:10.929163   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1109 11:09:10.951108   37204 logs.go:274] 0 containers: []
	W1109 11:09:10.951120   37204 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1109 11:09:10.951200   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1109 11:09:10.973216   37204 logs.go:274] 0 containers: []
	W1109 11:09:10.973228   37204 logs.go:276] No container was found matching "storage-provisioner"
	I1109 11:09:10.973315   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1109 11:09:10.998034   37204 logs.go:274] 0 containers: []
	W1109 11:09:10.998045   37204 logs.go:276] No container was found matching "kube-controller-manager"
	I1109 11:09:10.998052   37204 logs.go:123] Gathering logs for kubelet ...
	I1109 11:09:10.998059   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1109 11:09:11.038864   37204 logs.go:123] Gathering logs for dmesg ...
	I1109 11:09:11.038880   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1109 11:09:11.050809   37204 logs.go:123] Gathering logs for describe nodes ...
	I1109 11:09:11.050822   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1109 11:09:11.103961   37204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1109 11:09:11.103971   37204 logs.go:123] Gathering logs for Docker ...
	I1109 11:09:11.103984   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1109 11:09:11.117554   37204 logs.go:123] Gathering logs for container status ...
	I1109 11:09:11.117566   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1109 11:09:13.165960   37204 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.048399378s)
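	Each block above is one iteration of minikube's apiserver wait loop: every ~5 seconds it checks for a kube-apiserver process, lists containers for each expected control-plane component (finding none), and re-gathers the kubelet, dmesg, describe nodes, Docker, and container-status logs. The checks can be reproduced by hand inside the node (e.g. via `minikube ssh` or `docker exec`); the commands below are taken from the Run: lines above, with only the comments added:

	    # is any apiserver process up? (the poll that keeps failing)
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	    # any container for a given control-plane component? (repeated per component name)
	    docker ps -a --filter=name=k8s_kube-apiserver --format='{{.ID}}'
	    # fails with "connection refused" while nothing listens on localhost:8443
	    sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig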
	[... the same apiserver poll and log-gathering cycle repeats every ~5 seconds from 11:09:15 through 11:10:03: pgrep finds no kube-apiserver process, each docker ps filter (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kubernetes-dashboard, storage-provisioner, kube-controller-manager) returns 0 containers, the kubelet, dmesg, Docker, and container-status logs are re-gathered in varying order, and describe nodes fails each time with "The connection to the server localhost:8443 was refused - did you specify the right host or port?" ...]
	I1109 11:10:05.658703   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:10:05.809088   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1109 11:10:05.834896   37204 logs.go:274] 0 containers: []
	W1109 11:10:05.834909   37204 logs.go:276] No container was found matching "kube-apiserver"
	I1109 11:10:05.834992   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1109 11:10:05.861136   37204 logs.go:274] 0 containers: []
	W1109 11:10:05.861148   37204 logs.go:276] No container was found matching "etcd"
	I1109 11:10:05.861232   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1109 11:10:05.883423   37204 logs.go:274] 0 containers: []
	W1109 11:10:05.883434   37204 logs.go:276] No container was found matching "coredns"
	I1109 11:10:05.883516   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1109 11:10:05.906045   37204 logs.go:274] 0 containers: []
	W1109 11:10:05.906057   37204 logs.go:276] No container was found matching "kube-scheduler"
	I1109 11:10:05.906142   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1109 11:10:05.929493   37204 logs.go:274] 0 containers: []
	W1109 11:10:05.929504   37204 logs.go:276] No container was found matching "kube-proxy"
	I1109 11:10:05.929593   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1109 11:10:05.951351   37204 logs.go:274] 0 containers: []
	W1109 11:10:05.951363   37204 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1109 11:10:05.951444   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1109 11:10:05.972948   37204 logs.go:274] 0 containers: []
	W1109 11:10:05.972960   37204 logs.go:276] No container was found matching "storage-provisioner"
	I1109 11:10:05.973042   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1109 11:10:05.996627   37204 logs.go:274] 0 containers: []
	W1109 11:10:05.996638   37204 logs.go:276] No container was found matching "kube-controller-manager"
	I1109 11:10:05.996645   37204 logs.go:123] Gathering logs for Docker ...
	I1109 11:10:05.996651   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1109 11:10:06.010788   37204 logs.go:123] Gathering logs for container status ...
	I1109 11:10:06.010801   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1109 11:10:08.063207   37204 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.052413023s)
	I1109 11:10:08.063318   37204 logs.go:123] Gathering logs for kubelet ...
	I1109 11:10:08.063325   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1109 11:10:08.101014   37204 logs.go:123] Gathering logs for dmesg ...
	I1109 11:10:08.101026   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1109 11:10:08.112977   37204 logs.go:123] Gathering logs for describe nodes ...
	I1109 11:10:08.112990   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1109 11:10:08.166403   37204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1109 11:10:10.667362   37204 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:10:10.810993   37204 kubeadm.go:631] restartCluster took 4m4.22758552s
	W1109 11:10:10.811166   37204 out.go:239] ! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	! Unable to restart cluster, will reset it: apiserver healthz: apiserver process never appeared
	I1109 11:10:10.811195   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I1109 11:10:11.232176   37204 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 11:10:11.241789   37204 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1109 11:10:11.249323   37204 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I1109 11:10:11.249382   37204 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1109 11:10:11.256727   37204 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
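	Having given up on restartCluster (the apiserver never appeared within ~4 minutes), minikube falls back to a clean re-init: `kubeadm reset --force`, then a probe for leftover kubeconfigs before running `kubeadm init` again. The exit status 2 above simply means the reset removed all four files, so stale-config cleanup is skipped. A minimal sketch of that probe, using the same paths as the Run: line above (the echo branch is illustrative, not minikube's actual output):

	    # probe for stale kubeconfigs; status 2 (all missing) is the expected
	    # state right after `kubeadm reset --force`
	    sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf \
	        /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf \
	      || echo "no stale kubeconfigs found; proceeding with a fresh kubeadm init"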
	I1109 11:10:11.256754   37204 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1109 11:10:11.301172   37204 kubeadm.go:317] [init] Using Kubernetes version: v1.16.0
	I1109 11:10:11.301235   37204 kubeadm.go:317] [preflight] Running pre-flight checks
	I1109 11:10:11.609240   37204 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1109 11:10:11.609351   37204 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1109 11:10:11.609459   37204 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1109 11:10:11.830149   37204 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1109 11:10:11.831596   37204 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1109 11:10:11.838669   37204 kubeadm.go:317] [kubelet-start] Activating the kubelet service
	I1109 11:10:11.913597   37204 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1109 11:10:11.934968   37204 out.go:204]   - Generating certificates and keys ...
	I1109 11:10:11.935032   37204 kubeadm.go:317] [certs] Using existing ca certificate authority
	I1109 11:10:11.935077   37204 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I1109 11:10:11.935138   37204 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1109 11:10:11.935206   37204 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I1109 11:10:11.935297   37204 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I1109 11:10:11.935347   37204 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I1109 11:10:11.935397   37204 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I1109 11:10:11.935448   37204 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I1109 11:10:11.935504   37204 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1109 11:10:11.935564   37204 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1109 11:10:11.935602   37204 kubeadm.go:317] [certs] Using the existing "sa" key
	I1109 11:10:11.935656   37204 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1109 11:10:12.121987   37204 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1109 11:10:12.237280   37204 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1109 11:10:12.514199   37204 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1109 11:10:12.771018   37204 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1109 11:10:12.772210   37204 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1109 11:10:12.793877   37204 out.go:204]   - Booting up control plane ...
	I1109 11:10:12.793975   37204 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1109 11:10:12.794044   37204 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1109 11:10:12.794116   37204 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1109 11:10:12.794178   37204 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1109 11:10:12.794313   37204 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1109 11:10:52.788208   37204 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
	I1109 11:10:52.789055   37204 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1109 11:10:52.789299   37204 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1109 11:10:57.790602   37204 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1109 11:10:57.790918   37204 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1109 11:11:07.788007   37204 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1109 11:11:07.788189   37204 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1109 11:11:27.779562   37204 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1109 11:11:27.779771   37204 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1109 11:12:07.752501   37204 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1109 11:12:07.752675   37204 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1109 11:12:07.752688   37204 kubeadm.go:317] 
	I1109 11:12:07.752721   37204 kubeadm.go:317] Unfortunately, an error has occurred:
	I1109 11:12:07.752773   37204 kubeadm.go:317] 	timed out waiting for the condition
	I1109 11:12:07.752797   37204 kubeadm.go:317] 
	I1109 11:12:07.752832   37204 kubeadm.go:317] This error is likely caused by:
	I1109 11:12:07.752872   37204 kubeadm.go:317] 	- The kubelet is not running
	I1109 11:12:07.752958   37204 kubeadm.go:317] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1109 11:12:07.752966   37204 kubeadm.go:317] 
	I1109 11:12:07.753062   37204 kubeadm.go:317] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1109 11:12:07.753090   37204 kubeadm.go:317] 	- 'systemctl status kubelet'
	I1109 11:12:07.753116   37204 kubeadm.go:317] 	- 'journalctl -xeu kubelet'
	I1109 11:12:07.753126   37204 kubeadm.go:317] 
	I1109 11:12:07.753206   37204 kubeadm.go:317] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1109 11:12:07.753282   37204 kubeadm.go:317] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I1109 11:12:07.753364   37204 kubeadm.go:317] Here is one example how you may list all Kubernetes containers running in docker:
	I1109 11:12:07.753407   37204 kubeadm.go:317] 	- 'docker ps -a | grep kube | grep -v pause'
	I1109 11:12:07.753478   37204 kubeadm.go:317] 	Once you have found the failing container, you can inspect its logs with:
	I1109 11:12:07.753510   37204 kubeadm.go:317] 	- 'docker logs CONTAINERID'
	I1109 11:12:07.756123   37204 kubeadm.go:317] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I1109 11:12:07.756236   37204 kubeadm.go:317] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 18.09
	I1109 11:12:07.756335   37204 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1109 11:12:07.756408   37204 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1109 11:12:07.756481   37204 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
	W1109 11:12:07.756606   37204 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1109 11:12:07.756645   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm reset --cri-socket /var/run/dockershim.sock --force"
	I1109 11:12:08.172873   37204 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 11:12:08.182786   37204 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I1109 11:12:08.182849   37204 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1109 11:12:08.190574   37204 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1109 11:12:08.190602   37204 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1109 11:12:08.236224   37204 kubeadm.go:317] [init] Using Kubernetes version: v1.16.0
	I1109 11:12:08.236480   37204 kubeadm.go:317] [preflight] Running pre-flight checks
	I1109 11:12:08.535642   37204 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1109 11:12:08.535725   37204 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1109 11:12:08.535810   37204 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1109 11:12:08.755564   37204 kubeadm.go:317] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1109 11:12:08.757113   37204 kubeadm.go:317] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1109 11:12:08.763729   37204 kubeadm.go:317] [kubelet-start] Activating the kubelet service
	I1109 11:12:08.827601   37204 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1109 11:12:08.849212   37204 out.go:204]   - Generating certificates and keys ...
	I1109 11:12:08.849310   37204 kubeadm.go:317] [certs] Using existing ca certificate authority
	I1109 11:12:08.849370   37204 kubeadm.go:317] [certs] Using existing apiserver certificate and key on disk
	I1109 11:12:08.849441   37204 kubeadm.go:317] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1109 11:12:08.849533   37204 kubeadm.go:317] [certs] Using existing front-proxy-ca certificate authority
	I1109 11:12:08.849605   37204 kubeadm.go:317] [certs] Using existing front-proxy-client certificate and key on disk
	I1109 11:12:08.849645   37204 kubeadm.go:317] [certs] Using existing etcd/ca certificate authority
	I1109 11:12:08.849689   37204 kubeadm.go:317] [certs] Using existing etcd/server certificate and key on disk
	I1109 11:12:08.849727   37204 kubeadm.go:317] [certs] Using existing etcd/peer certificate and key on disk
	I1109 11:12:08.849789   37204 kubeadm.go:317] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1109 11:12:08.849866   37204 kubeadm.go:317] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1109 11:12:08.849897   37204 kubeadm.go:317] [certs] Using the existing "sa" key
	I1109 11:12:08.849949   37204 kubeadm.go:317] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1109 11:12:09.015652   37204 kubeadm.go:317] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1109 11:12:09.494618   37204 kubeadm.go:317] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1109 11:12:09.566734   37204 kubeadm.go:317] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1109 11:12:09.688192   37204 kubeadm.go:317] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1109 11:12:09.688734   37204 kubeadm.go:317] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1109 11:12:09.710265   37204 out.go:204]   - Booting up control plane ...
	I1109 11:12:09.710437   37204 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1109 11:12:09.710601   37204 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1109 11:12:09.710700   37204 kubeadm.go:317] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1109 11:12:09.710877   37204 kubeadm.go:317] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1109 11:12:09.711122   37204 kubeadm.go:317] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1109 11:12:49.667700   37204 kubeadm.go:317] [kubelet-check] Initial timeout of 40s passed.
	I1109 11:12:49.668168   37204 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1109 11:12:49.668320   37204 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1109 11:12:54.666602   37204 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1109 11:12:54.666809   37204 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1109 11:13:04.659605   37204 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1109 11:13:04.659815   37204 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1109 11:13:24.645846   37204 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1109 11:13:24.646013   37204 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1109 11:14:04.617289   37204 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1109 11:14:04.617457   37204 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1109 11:14:04.617480   37204 kubeadm.go:317] 
	I1109 11:14:04.617509   37204 kubeadm.go:317] Unfortunately, an error has occurred:
	I1109 11:14:04.617550   37204 kubeadm.go:317] 	timed out waiting for the condition
	I1109 11:14:04.617558   37204 kubeadm.go:317] 
	I1109 11:14:04.617619   37204 kubeadm.go:317] This error is likely caused by:
	I1109 11:14:04.617649   37204 kubeadm.go:317] 	- The kubelet is not running
	I1109 11:14:04.617733   37204 kubeadm.go:317] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1109 11:14:04.617744   37204 kubeadm.go:317] 
	I1109 11:14:04.617819   37204 kubeadm.go:317] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1109 11:14:04.617846   37204 kubeadm.go:317] 	- 'systemctl status kubelet'
	I1109 11:14:04.617868   37204 kubeadm.go:317] 	- 'journalctl -xeu kubelet'
	I1109 11:14:04.617874   37204 kubeadm.go:317] 
	I1109 11:14:04.617957   37204 kubeadm.go:317] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1109 11:14:04.618035   37204 kubeadm.go:317] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I1109 11:14:04.618102   37204 kubeadm.go:317] Here is one example how you may list all Kubernetes containers running in docker:
	I1109 11:14:04.618139   37204 kubeadm.go:317] 	- 'docker ps -a | grep kube | grep -v pause'
	I1109 11:14:04.618191   37204 kubeadm.go:317] 	Once you have found the failing container, you can inspect its logs with:
	I1109 11:14:04.618215   37204 kubeadm.go:317] 	- 'docker logs CONTAINERID'
	I1109 11:14:04.620908   37204 kubeadm.go:317] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I1109 11:14:04.621018   37204 kubeadm.go:317] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 18.09
	I1109 11:14:04.621107   37204 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1109 11:14:04.621192   37204 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1109 11:14:04.621253   37204 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
	I1109 11:14:04.621273   37204 kubeadm.go:398] StartCluster complete in 7m58.02267343s
	I1109 11:14:04.621369   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1109 11:14:04.645035   37204 logs.go:274] 0 containers: []
	W1109 11:14:04.645047   37204 logs.go:276] No container was found matching "kube-apiserver"
	I1109 11:14:04.645132   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1109 11:14:04.667780   37204 logs.go:274] 0 containers: []
	W1109 11:14:04.667793   37204 logs.go:276] No container was found matching "etcd"
	I1109 11:14:04.667880   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1109 11:14:04.692026   37204 logs.go:274] 0 containers: []
	W1109 11:14:04.692041   37204 logs.go:276] No container was found matching "coredns"
	I1109 11:14:04.692132   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1109 11:14:04.714003   37204 logs.go:274] 0 containers: []
	W1109 11:14:04.714016   37204 logs.go:276] No container was found matching "kube-scheduler"
	I1109 11:14:04.714101   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1109 11:14:04.739118   37204 logs.go:274] 0 containers: []
	W1109 11:14:04.739132   37204 logs.go:276] No container was found matching "kube-proxy"
	I1109 11:14:04.739217   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1109 11:14:04.762684   37204 logs.go:274] 0 containers: []
	W1109 11:14:04.762695   37204 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1109 11:14:04.762781   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1109 11:14:04.786482   37204 logs.go:274] 0 containers: []
	W1109 11:14:04.786493   37204 logs.go:276] No container was found matching "storage-provisioner"
	I1109 11:14:04.786587   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1109 11:14:04.810343   37204 logs.go:274] 0 containers: []
	W1109 11:14:04.810354   37204 logs.go:276] No container was found matching "kube-controller-manager"
	I1109 11:14:04.810360   37204 logs.go:123] Gathering logs for describe nodes ...
	I1109 11:14:04.810367   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1109 11:14:04.875222   37204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1109 11:14:04.875232   37204 logs.go:123] Gathering logs for Docker ...
	I1109 11:14:04.875239   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1109 11:14:04.890599   37204 logs.go:123] Gathering logs for container status ...
	I1109 11:14:04.890615   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1109 11:14:06.939039   37204 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.048428933s)
	I1109 11:14:06.939196   37204 logs.go:123] Gathering logs for kubelet ...
	I1109 11:14:06.939204   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1109 11:14:06.979307   37204 logs.go:123] Gathering logs for dmesg ...
	I1109 11:14:06.979320   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1109 11:14:06.991014   37204 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1109 11:14:06.991038   37204 out.go:239] * 
	W1109 11:14:06.991161   37204 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1109 11:14:06.991175   37204 out.go:239] * 
	W1109 11:14:06.991800   37204 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1109 11:14:07.061894   37204 out.go:177] 
	W1109 11:14:07.103353   37204 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1109 11:14:07.103414   37204 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1109 11:14:07.103447   37204 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1109 11:14:07.145336   37204 out.go:177] 

** /stderr **
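The run above dies in wait-control-plane: the kubelet healthz probe on 127.0.0.1:10248 is refused for the entire 4m0s window, so kubeadm never sees a control plane come up. A minimal triage sketch, assuming the docker-driver node container still exists under the profile name old-k8s-version-110019 seen in this run and that its base image runs systemd (the log's own advice assumes a systemd-powered node):

	# kubelet state inside the minikube node container
	docker exec old-k8s-version-110019 systemctl status kubelet --no-pager
	docker exec old-k8s-version-110019 journalctl -xeu kubelet --no-pager | tail -n 50
	# the same probe kubeadm's kubelet-check loops on
	docker exec old-k8s-version-110019 curl -sSL http://localhost:10248/healthz
	# any Kubernetes containers that did start, per kubeadm's advice above
	docker exec old-k8s-version-110019 /bin/sh -c "docker ps -a | grep kube | grep -v pause"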
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-darwin-amd64 start -p old-k8s-version-110019 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --kubernetes-version=v1.16.0": exit status 109
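The suggestion minikube prints (the W1109 11:14:07.103414 line above) points at a kubelet/Docker cgroup-driver mismatch. A hedged retry sketch, reusing the start arguments from this test with the suggested override added; the profile name and flags are taken from the log, and this is a diagnostic step rather than a verified fix:

	out/minikube-darwin-amd64 delete -p old-k8s-version-110019
	out/minikube-darwin-amd64 start -p old-k8s-version-110019 --memory=2200 --driver=docker \
	  --kubernetes-version=v1.16.0 --extra-config=kubelet.cgroup-driver=systemd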
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-110019
helpers_test.go:235: (dbg) docker inspect old-k8s-version-110019:

-- stdout --
	[
	    {
	        "Id": "179e62f50506851342bc4b05c87d0504aae01b76a29a4d4cfea86e9a42803961",
	        "Created": "2022-11-09T19:00:25.764137036Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 280606,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-11-09T19:06:03.12299926Z",
	            "FinishedAt": "2022-11-09T19:06:00.201300966Z"
	        },
	        "Image": "sha256:866c1fe4e3f2d2bfd7e546c12f77c7ef1d94d65a891923ff6772712a9f20df40",
	        "ResolvConfPath": "/var/lib/docker/containers/179e62f50506851342bc4b05c87d0504aae01b76a29a4d4cfea86e9a42803961/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/179e62f50506851342bc4b05c87d0504aae01b76a29a4d4cfea86e9a42803961/hostname",
	        "HostsPath": "/var/lib/docker/containers/179e62f50506851342bc4b05c87d0504aae01b76a29a4d4cfea86e9a42803961/hosts",
	        "LogPath": "/var/lib/docker/containers/179e62f50506851342bc4b05c87d0504aae01b76a29a4d4cfea86e9a42803961/179e62f50506851342bc4b05c87d0504aae01b76a29a4d4cfea86e9a42803961-json.log",
	        "Name": "/old-k8s-version-110019",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-110019:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-110019",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/6175df1d28c3e25a0e5c63a5d7570f40eb9f7f886fa3164897a59ab06ccc7cd7-init/diff:/var/lib/docker/overlay2/8c1487330bae95024fb04d0a8169f7cc81fd1ba3c27821870f7ac7c3f14eba21/diff:/var/lib/docker/overlay2/bcaf2c5b25be7a7acfb5b663242cc7456d579ea111b07e556bc197c7bfe8eceb/diff:/var/lib/docker/overlay2/0638d8210ce7d8ac0e4379a16e33ec4ba3dad0040bc7b1e6eee9a3ce3b1bab29/diff:/var/lib/docker/overlay2/82d04ede67e6bea7f3cfd2fd8cdf0af23333441d1a311f6c55109e45255a64ad/diff:/var/lib/docker/overlay2/00bbdacd39c41ffbc754eaba2d71640e0fb4097eb9097b8c2a5999bb5a8d4954/diff:/var/lib/docker/overlay2/dcea734b558e644021b8ede0f23c4e46a58e4c344becb334c465fd62b5d48e24/diff:/var/lib/docker/overlay2/ac3602d3dd4e947c3a4676ef8c632089eb73ee68aba964a7d95271ee18eb97f2/diff:/var/lib/docker/overlay2/ac2acc0194de08599857f1b8448ae7b4683ed77f947900bfd694cf26f6c54ffc/diff:/var/lib/docker/overlay2/fdbfaed38c23fa0bd5c54d311629017408fe01fee83151dd3f3d638a7617f4e4/diff:/var/lib/docker/overlay2/d025fd583df9cfe294d4d46082700b7f5c621b93a796ba7f8f971ddaa60fd83a/diff:/var/lib/docker/overlay2/f4c2a2db4696fc9f1bd6e98e05d393517d2daaeb90f35ae457c61d742e4cc236/diff:/var/lib/docker/overlay2/5ca3c90c302636922d6701cd2547bba3ccd398ec5ade10e04dccd4fe6104a487/diff:/var/lib/docker/overlay2/a5a65589498adaf58375923e30a95f690962a85ecbf6af317b41821b327542b2/diff:/var/lib/docker/overlay2/ff71186ee131d2e64c9cb2be6b53d85bf84ea4a195c417de669d42fe5e10eecd/diff:/var/lib/docker/overlay2/493a221169b45236aaee4b88113fdb3c67c8fbb99e614b4a728d47a8448a3f3f/diff:/var/lib/docker/overlay2/4bafd70e2ae935045921b84746858ec62889e360ddf11495e2a15831b74efc0a/diff:/var/lib/docker/overlay2/90fd6faa0cf3969fb696847bf51d309918860f0cc4599a708e4932647f26c73e/diff:/var/lib/docker/overlay2/ea92881c6586b95c867a9734394d9d100f56f7cbe0812c11395e47b6035c4508/diff:/var/lib/docker/overlay2/ecab8d41ffba5fecbe6e01377fa7b74a9a81ceea0b6ce37ad2373c1bbf89f44a/diff:/var/lib/docker/overlay2/0a01bb2689fa7bca8ea3322bf7e0b9d33392f902c096d5e452da6755180c4a06/diff:/var/lib/docker/overlay2/ab470b7aab8ddccf634d27d72ad09bcf355c2bd4439dcdf67f345220671e4238/diff:/var/lib/docker/overlay2/e7aae4cf5fe266e78947648cb680b6e10a1e6f6527df18d86605a770111ddaa5/diff:/var/lib/docker/overlay2/6dd4c667173ad3322ca465531a62d549cfe66fbb40165818a4e3923e37895eee/diff:/var/lib/docker/overlay2/6053a29c5dc20476b02a6b6d0dafc1d7a81702c6680392177192d709341eabd0/diff:/var/lib/docker/overlay2/80d8ec07feaf3a90ae374a6503523b083045c37de15abf3c2f12d0a21bea84c4/diff:/var/lib/docker/overlay2/55ad8679d9710c334bac8daf6e3b0f9a8fcafc62f44b8f2612bb054ff91aac64/diff:/var/lib/docker/overlay2/64743b589f654fa1e35b0e7be5ff94a3bebfa17c8f1c9811e0d42cdade3f57e7/diff:/var/lib/docker/overlay2/3722e4a69202d28b84adf462e6aa9f065e8079b1c00f372b68d56c9b2c44e658/diff:/var/lib/docker/overlay2/d1ceccb867521773a63007a600d64b8537e1cb227e2d9a6f9df5525e8315b3ef/diff:/var/lib/docker/overlay2/5de0b7762a7bcd971dba6ab8b5ec3a1163b2eb72c904b17e6b0b10dac2ed8cc6/diff:/var/lib/docker/overlay2/36f2255b89964a0e12e3175634bd5c1dfabf520e5a894e260323e26c3a383e8c/diff:/var/lib/docker/overlay2/58ca82e7923ce16120ce2bdcabd5d071ca9618a7139cac111d5d271fcb44d6b6/diff:/var/lib/docker/overlay2/c6b28d136c7e3834c9977a2115a7c798e71334d33a76997b156f96642e187677/diff:/var/lib/docker/overlay2/8a75a817735ea5c25b9b75502ba91bba33b5160dab28a17f2f44fa68bd8dcc3f/diff:/var/lib/docker/overlay2/4513fa1cc1e8023f3c0a924e36218c37dfe3595aec46e4d2d96d6c165774b8a3/diff:/var/lib/docker/overlay2/3d3be6ad927b487673f3c43210c9ea9a1acfa4d46cbcb724fce27baf9158b507/diff:/var/lib/docker/overlay2/b8e22ec10062469f680485d2f5f73afce0218c32b25e56188c00547a8152d0c7/diff:/var/lib/docker/overlay2/cb1cb5efbfa387d8fc791f28bdad103d39664ae58a6e372eddc5588db5779427/diff:/var/lib/docker/overlay2/c796de90ee7673fa4d316d056c320ee04f0b6ba574aaa33e4073e3a7200c11a6/diff:/var/lib/docker/overlay2/73c2de759693b5ffd934f7354e3db91ba89c6a5a9c24621fd7c27411bc335c5a/diff:/var/lib/docker/overlay2/46e9fe39b8edeecbe0b31037d24c2994ac3848fbb3af5ed3c47ca2fc1ad0d301/diff:/var/lib/docker/overlay2/febe0fa15a70685bf242a86e91427efdb9b7ec302a48a7004f89cc569145c7a1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6175df1d28c3e25a0e5c63a5d7570f40eb9f7f886fa3164897a59ab06ccc7cd7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6175df1d28c3e25a0e5c63a5d7570f40eb9f7f886fa3164897a59ab06ccc7cd7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6175df1d28c3e25a0e5c63a5d7570f40eb9f7f886fa3164897a59ab06ccc7cd7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-110019",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-110019/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-110019",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-110019",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-110019",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "045b0c7f7b825a9537dd5e9af2fae17397ce41b99df276c464f08a1c8dd05584",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "65162"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "65163"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "65164"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "65165"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "65166"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/045b0c7f7b82",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-110019": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "179e62f50506",
	                        "old-k8s-version-110019"
	                    ],
	                    "NetworkID": "70a1b44058ab5d3fa2f8c48ca78ea76e689efbb2630885d7458319462051756b",
	                    "EndpointID": "306574ce40dd95945d1d7c9e7051a4cf459e90d59c42b7a6c013c989c23ad2d6",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
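The full docker inspect document above is kept for the archive, but the fields relevant to this failure can be pulled directly with a Go template (a sketch; the profile name comes from this run):

	docker inspect -f 'status={{.State.Status}} started={{.State.StartedAt}} restarts={{.RestartCount}}' old-k8s-version-110019

Here that would report a running container with zero restarts, so the failure is inside the node (kubelet/control plane) rather than at the Docker level.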
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-110019 -n old-k8s-version-110019
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-110019 -n old-k8s-version-110019: exit status 2 (398.984267ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
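--format={{.Host}} only surfaces the container state, which is why the command can exit non-zero while printing "Running": the host is up, but a cluster component inside it is not. The per-component breakdown uses the same template mechanism (a sketch with the documented status fields):

	out/minikube-darwin-amd64 status -p old-k8s-version-110019 --format='host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}}'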
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-110019 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-110019 logs -n 25: (3.51831041s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| addons  | enable metrics-server -p no-preload-110035        | no-preload-110035            | jenkins | v1.28.0 | 09 Nov 22 11:01 PST | 09 Nov 22 11:01 PST |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                              |         |         |                     |                     |
	| stop    | -p no-preload-110035                              | no-preload-110035            | jenkins | v1.28.0 | 09 Nov 22 11:01 PST | 09 Nov 22 11:01 PST |
	|         | --alsologtostderr -v=3                            |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-110035             | no-preload-110035            | jenkins | v1.28.0 | 09 Nov 22 11:01 PST | 09 Nov 22 11:01 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-110035                              | no-preload-110035            | jenkins | v1.28.0 | 09 Nov 22 11:01 PST | 09 Nov 22 11:06 PST |
	|         | --memory=2200                                     |                              |         |         |                     |                     |
	|         | --alsologtostderr                                 |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                       |                              |         |         |                     |                     |
	|         | --driver=docker                                   |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-110019   | old-k8s-version-110019       | jenkins | v1.28.0 | 09 Nov 22 11:04 PST |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-110019                         | old-k8s-version-110019       | jenkins | v1.28.0 | 09 Nov 22 11:05 PST | 09 Nov 22 11:06 PST |
	|         | --alsologtostderr -v=3                            |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-110019        | old-k8s-version-110019       | jenkins | v1.28.0 | 09 Nov 22 11:06 PST | 09 Nov 22 11:06 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-110019                         | old-k8s-version-110019       | jenkins | v1.28.0 | 09 Nov 22 11:06 PST |                     |
	|         | --memory=2200                                     |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                              |         |         |                     |                     |
	|         | --kvm-network=default                             |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                     |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                           |                              |         |         |                     |                     |
	|         | --keep-context=false                              |                              |         |         |                     |                     |
	|         | --driver=docker                                   |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                      |                              |         |         |                     |                     |
	| ssh     | -p no-preload-110035 sudo                         | no-preload-110035            | jenkins | v1.28.0 | 09 Nov 22 11:07 PST | 09 Nov 22 11:07 PST |
	|         | crictl images -o json                             |                              |         |         |                     |                     |
	| pause   | -p no-preload-110035                              | no-preload-110035            | jenkins | v1.28.0 | 09 Nov 22 11:07 PST | 09 Nov 22 11:07 PST |
	|         | --alsologtostderr -v=1                            |                              |         |         |                     |                     |
	| unpause | -p no-preload-110035                              | no-preload-110035            | jenkins | v1.28.0 | 09 Nov 22 11:07 PST | 09 Nov 22 11:07 PST |
	|         | --alsologtostderr -v=1                            |                              |         |         |                     |                     |
	| delete  | -p no-preload-110035                              | no-preload-110035            | jenkins | v1.28.0 | 09 Nov 22 11:07 PST | 09 Nov 22 11:07 PST |
	| delete  | -p no-preload-110035                              | no-preload-110035            | jenkins | v1.28.0 | 09 Nov 22 11:07 PST | 09 Nov 22 11:07 PST |
	| start   | -p embed-certs-110722                             | embed-certs-110722           | jenkins | v1.28.0 | 09 Nov 22 11:07 PST | 09 Nov 22 11:08 PST |
	|         | --memory=2200                                     |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                     |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                      |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-110722       | embed-certs-110722           | jenkins | v1.28.0 | 09 Nov 22 11:08 PST | 09 Nov 22 11:08 PST |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain            |                              |         |         |                     |                     |
	| stop    | -p embed-certs-110722                             | embed-certs-110722           | jenkins | v1.28.0 | 09 Nov 22 11:08 PST | 09 Nov 22 11:08 PST |
	|         | --alsologtostderr -v=3                            |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-110722            | embed-certs-110722           | jenkins | v1.28.0 | 09 Nov 22 11:08 PST | 09 Nov 22 11:08 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-110722                             | embed-certs-110722           | jenkins | v1.28.0 | 09 Nov 22 11:08 PST | 09 Nov 22 11:13 PST |
	|         | --memory=2200                                     |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                              |         |         |                     |                     |
	|         | --embed-certs --driver=docker                     |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                      |                              |         |         |                     |                     |
	| ssh     | -p embed-certs-110722 sudo                        | embed-certs-110722           | jenkins | v1.28.0 | 09 Nov 22 11:13 PST | 09 Nov 22 11:13 PST |
	|         | crictl images -o json                             |                              |         |         |                     |                     |
	| pause   | -p embed-certs-110722                             | embed-certs-110722           | jenkins | v1.28.0 | 09 Nov 22 11:13 PST | 09 Nov 22 11:13 PST |
	|         | --alsologtostderr -v=1                            |                              |         |         |                     |                     |
	| unpause | -p embed-certs-110722                             | embed-certs-110722           | jenkins | v1.28.0 | 09 Nov 22 11:13 PST | 09 Nov 22 11:13 PST |
	|         | --alsologtostderr -v=1                            |                              |         |         |                     |                     |
	| delete  | -p embed-certs-110722                             | embed-certs-110722           | jenkins | v1.28.0 | 09 Nov 22 11:13 PST | 09 Nov 22 11:13 PST |
	| delete  | -p embed-certs-110722                             | embed-certs-110722           | jenkins | v1.28.0 | 09 Nov 22 11:13 PST | 09 Nov 22 11:13 PST |
	| delete  | -p                                                | disable-driver-mounts-111353 | jenkins | v1.28.0 | 09 Nov 22 11:13 PST | 09 Nov 22 11:13 PST |
	|         | disable-driver-mounts-111353                      |                              |         |         |                     |                     |
	| start   | -p                                                | default-k8s-diff-port-111353 | jenkins | v1.28.0 | 09 Nov 22 11:13 PST |                     |
	|         | default-k8s-diff-port-111353                      |                              |         |         |                     |                     |
	|         | --memory=2200                                     |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                     |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                             |                              |         |         |                     |                     |
	|         | --driver=docker                                   |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                      |                              |         |         |                     |                     |
	|---------|---------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/11/09 11:13:53
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.19.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1109 11:13:53.659129   38216 out.go:296] Setting OutFile to fd 1 ...
	I1109 11:13:53.659316   38216 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1109 11:13:53.659321   38216 out.go:309] Setting ErrFile to fd 2...
	I1109 11:13:53.659325   38216 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1109 11:13:53.659437   38216 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15331-22028/.minikube/bin
	I1109 11:13:53.660061   38216 out.go:303] Setting JSON to false
	I1109 11:13:53.679610   38216 start.go:116] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":15208,"bootTime":1668006025,"procs":385,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.0","kernelVersion":"22.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W1109 11:13:53.679697   38216 start.go:124] gopshost.Virtualization returned error: not implemented yet
	I1109 11:13:53.701678   38216 out.go:177] * [default-k8s-diff-port-111353] minikube v1.28.0 on Darwin 13.0
	I1109 11:13:53.745665   38216 notify.go:220] Checking for updates...
	I1109 11:13:53.767391   38216 out.go:177]   - MINIKUBE_LOCATION=15331
	I1109 11:13:53.789484   38216 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15331-22028/kubeconfig
	I1109 11:13:53.811321   38216 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1109 11:13:53.832354   38216 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 11:13:53.854250   38216 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15331-22028/.minikube
	I1109 11:13:53.876058   38216 config.go:180] Loaded profile config "old-k8s-version-110019": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	I1109 11:13:53.876139   38216 driver.go:365] Setting default libvirt URI to qemu:///system
	I1109 11:13:53.938783   38216 docker.go:137] docker version: linux-20.10.20
	I1109 11:13:53.938930   38216 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 11:13:54.079929   38216 info.go:266] docker info: {ID:DDH7:AERQ:YR2O:D5QA:GJP7:5WGB:4FTA:645H:2EO7:4YBT:LF5P:55H2 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:53 SystemTime:2022-11-09 19:13:53.994552002 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231719936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.1] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1109 11:13:54.122337   38216 out.go:177] * Using the docker driver based on user configuration
	I1109 11:13:54.143457   38216 start.go:282] selected driver: docker
	I1109 11:13:54.143513   38216 start.go:808] validating driver "docker" against <nil>
	I1109 11:13:54.143541   38216 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 11:13:54.147585   38216 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 11:13:54.288282   38216 info.go:266] docker info: {ID:DDH7:AERQ:YR2O:D5QA:GJP7:5WGB:4FTA:645H:2EO7:4YBT:LF5P:55H2 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:false NGoroutines:53 SystemTime:2022-11-09 19:13:54.204073976 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231719936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.1] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1109 11:13:54.288402   38216 start_flags.go:303] no existing cluster config was found, will generate one from the flags 
	I1109 11:13:54.288535   38216 start_flags.go:901] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1109 11:13:54.309702   38216 out.go:177] * Using Docker Desktop driver with root privileges
	I1109 11:13:54.331504   38216 cni.go:95] Creating CNI manager for ""
	I1109 11:13:54.331538   38216 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1109 11:13:54.331554   38216 start_flags.go:317] config:
	{Name:default-k8s-diff-port-111353 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:default-k8s-diff-port-111353 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1109 11:13:54.353385   38216 out.go:177] * Starting control plane node default-k8s-diff-port-111353 in cluster default-k8s-diff-port-111353
	I1109 11:13:54.395462   38216 cache.go:120] Beginning downloading kic base image for docker with docker
	I1109 11:13:54.416445   38216 out.go:177] * Pulling base image ...
	I1109 11:13:54.437597   38216 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1109 11:13:54.437621   38216 image.go:76] Checking for gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon
	I1109 11:13:54.437686   38216 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15331-22028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	I1109 11:13:54.437710   38216 cache.go:57] Caching tarball of preloaded images
	I1109 11:13:54.437949   38216 preload.go:174] Found /Users/jenkins/minikube-integration/15331-22028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1109 11:13:54.437967   38216 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on docker
	I1109 11:13:54.438988   38216 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/default-k8s-diff-port-111353/config.json ...
	I1109 11:13:54.439130   38216 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/default-k8s-diff-port-111353/config.json: {Name:mkdf3334ea522af10424c0471b5d53ff8a9d890d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 11:13:54.493671   38216 image.go:80] Found gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon, skipping pull
	I1109 11:13:54.493702   38216 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 exists in daemon, skipping load
	I1109 11:13:54.493713   38216 cache.go:208] Successfully downloaded all kic artifacts
	I1109 11:13:54.493774   38216 start.go:364] acquiring machines lock for default-k8s-diff-port-111353: {Name:mka727bc1d82b1049c3386da4be37a81e7185cb9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 11:13:54.493936   38216 start.go:368] acquired machines lock for "default-k8s-diff-port-111353" in 148.198µs
	I1109 11:13:54.493969   38216 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-111353 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:default-k8s-diff-port-111353 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet} &{Name: IP: Port:8444 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1109 11:13:54.494023   38216 start.go:125] createHost starting for "" (driver="docker")
	I1109 11:13:54.539063   38216 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I1109 11:13:54.539473   38216 start.go:159] libmachine.API.Create for "default-k8s-diff-port-111353" (driver="docker")
	I1109 11:13:54.539521   38216 client.go:168] LocalClient.Create starting
	I1109 11:13:54.539714   38216 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca.pem
	I1109 11:13:54.539799   38216 main.go:134] libmachine: Decoding PEM data...
	I1109 11:13:54.539837   38216 main.go:134] libmachine: Parsing certificate...
	I1109 11:13:54.539946   38216 main.go:134] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/cert.pem
	I1109 11:13:54.540017   38216 main.go:134] libmachine: Decoding PEM data...
	I1109 11:13:54.540034   38216 main.go:134] libmachine: Parsing certificate...
	I1109 11:13:54.540960   38216 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-111353 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1109 11:13:54.597306   38216 cli_runner.go:211] docker network inspect default-k8s-diff-port-111353 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1109 11:13:54.597408   38216 network_create.go:272] running [docker network inspect default-k8s-diff-port-111353] to gather additional debugging logs...
	I1109 11:13:54.597426   38216 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-111353
	W1109 11:13:54.651761   38216 cli_runner.go:211] docker network inspect default-k8s-diff-port-111353 returned with exit code 1
	I1109 11:13:54.651785   38216 network_create.go:275] error running [docker network inspect default-k8s-diff-port-111353]: docker network inspect default-k8s-diff-port-111353: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: default-k8s-diff-port-111353
	I1109 11:13:54.651802   38216 network_create.go:277] output of [docker network inspect default-k8s-diff-port-111353]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: default-k8s-diff-port-111353
	
	** /stderr **
	I1109 11:13:54.651911   38216 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1109 11:13:54.706749   38216 network.go:295] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc0003e0818] misses:0}
	I1109 11:13:54.706788   38216 network.go:241] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1109 11:13:54.706803   38216 network_create.go:115] attempt to create docker network default-k8s-diff-port-111353 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1109 11:13:54.706896   38216 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-111353 default-k8s-diff-port-111353
	W1109 11:13:54.760583   38216 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-111353 default-k8s-diff-port-111353 returned with exit code 1
	W1109 11:13:54.760633   38216 network_create.go:107] failed to create docker network default-k8s-diff-port-111353 192.168.49.0/24, will retry: subnet is taken
	I1109 11:13:54.760889   38216 network.go:286] skipping subnet 192.168.49.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0003e0818] amended:false}} dirty:map[] misses:0}
	I1109 11:13:54.760906   38216 network.go:244] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1109 11:13:54.761117   38216 network.go:295] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0003e0818] amended:true}} dirty:map[192.168.49.0:0xc0003e0818 192.168.58.0:0xc0008364d8] misses:0}
	I1109 11:13:54.761132   38216 network.go:241] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1109 11:13:54.761141   38216 network_create.go:115] attempt to create docker network default-k8s-diff-port-111353 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I1109 11:13:54.761234   38216 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-111353 default-k8s-diff-port-111353
	W1109 11:13:54.815655   38216 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-111353 default-k8s-diff-port-111353 returned with exit code 1
	W1109 11:13:54.815693   38216 network_create.go:107] failed to create docker network default-k8s-diff-port-111353 192.168.58.0/24, will retry: subnet is taken
	I1109 11:13:54.815944   38216 network.go:286] skipping subnet 192.168.58.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0003e0818] amended:true}} dirty:map[192.168.49.0:0xc0003e0818 192.168.58.0:0xc0008364d8] misses:1}
	I1109 11:13:54.815961   38216 network.go:244] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1109 11:13:54.816759   38216 network.go:295] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0003e0818] amended:true}} dirty:map[192.168.49.0:0xc0003e0818 192.168.58.0:0xc0008364d8 192.168.67.0:0xc000c382f8] misses:1}
	I1109 11:13:54.816816   38216 network.go:241] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1109 11:13:54.816834   38216 network_create.go:115] attempt to create docker network default-k8s-diff-port-111353 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1109 11:13:54.816984   38216 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-111353 default-k8s-diff-port-111353
	W1109 11:13:54.871428   38216 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-111353 default-k8s-diff-port-111353 returned with exit code 1
	W1109 11:13:54.871465   38216 network_create.go:107] failed to create docker network default-k8s-diff-port-111353 192.168.67.0/24, will retry: subnet is taken
	I1109 11:13:54.871718   38216 network.go:286] skipping subnet 192.168.67.0 that has unexpired reservation: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0003e0818] amended:true}} dirty:map[192.168.49.0:0xc0003e0818 192.168.58.0:0xc0008364d8 192.168.67.0:0xc000c382f8] misses:2}
	I1109 11:13:54.871736   38216 network.go:244] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1109 11:13:54.871936   38216 network.go:295] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[192.168.49.0:0xc0003e0818] amended:true}} dirty:map[192.168.49.0:0xc0003e0818 192.168.58.0:0xc0008364d8 192.168.67.0:0xc000c382f8 192.168.76.0:0xc000836000] misses:2}
	I1109 11:13:54.871957   38216 network.go:241] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I1109 11:13:54.871963   38216 network_create.go:115] attempt to create docker network default-k8s-diff-port-111353 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1109 11:13:54.872045   38216 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-111353 default-k8s-diff-port-111353
	I1109 11:13:54.958675   38216 network_create.go:99] docker network default-k8s-diff-port-111353 192.168.76.0/24 created
	I1109 11:13:54.958720   38216 kic.go:106] calculated static IP "192.168.76.2" for the "default-k8s-diff-port-111353" container
	I1109 11:13:54.958864   38216 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1109 11:13:55.014110   38216 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-111353 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-111353 --label created_by.minikube.sigs.k8s.io=true
	I1109 11:13:55.068353   38216 oci.go:103] Successfully created a docker volume default-k8s-diff-port-111353
	I1109 11:13:55.068493   38216 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-111353-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-111353 --entrypoint /usr/bin/test -v default-k8s-diff-port-111353:/var gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -d /var/lib
	I1109 11:13:55.513147   38216 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-111353
	I1109 11:13:55.513182   38216 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1109 11:13:55.513197   38216 kic.go:179] Starting extracting preloaded images to volume ...
	I1109 11:13:55.513324   38216 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15331-22028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-111353:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -I lz4 -xf /preloaded.tar -C /extractDir
	I1109 11:13:59.967624   38216 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /Users/jenkins/minikube-integration/15331-22028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-111353:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 -I lz4 -xf /preloaded.tar -C /extractDir: (4.454270632s)
	I1109 11:13:59.967646   38216 kic.go:188] duration metric: took 4.454491 seconds to extract preloaded images to volume
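
Editorial note: the preload step above mounts the lz4 tarball read-only into a throwaway kicbase container and untars it into the named volume that will become the node's /var. To spot-check that the Docker image layers landed in the volume, a hedged sketch assuming a small local image such as busybox is available:

    docker run --rm -v default-k8s-diff-port-111353:/var busybox \
      ls /var/lib/docker/overlay2 | head
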
	I1109 11:13:59.967776   38216 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1109 11:14:00.109585   38216 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-111353 --name default-k8s-diff-port-111353 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-111353 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-111353 --network default-k8s-diff-port-111353 --ip 192.168.76.2 --volume default-k8s-diff-port-111353:/var --security-opt apparmor=unconfined --memory=2200mb --memory-swap=2200mb --cpus=2 -e container=docker --expose 8444 --publish=8444 --publish=22 --publish=2376 --publish=5000 --publish=32443 gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456
	I1109 11:14:00.451829   38216 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-111353 --format={{.State.Running}}
	I1109 11:14:00.512504   38216 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-111353 --format={{.State.Status}}
	I1109 11:14:00.572264   38216 cli_runner.go:164] Run: docker exec default-k8s-diff-port-111353 stat /var/lib/dpkg/alternatives/iptables
	I1109 11:14:00.687503   38216 oci.go:144] the created container "default-k8s-diff-port-111353" has a running status.
	I1109 11:14:00.687533   38216 kic.go:210] Creating ssh key for kic: /Users/jenkins/minikube-integration/15331-22028/.minikube/machines/default-k8s-diff-port-111353/id_rsa...
	I1109 11:14:00.906448   38216 kic_runner.go:191] docker (temp): /Users/jenkins/minikube-integration/15331-22028/.minikube/machines/default-k8s-diff-port-111353/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1109 11:14:01.011346   38216 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-111353 --format={{.State.Status}}
	I1109 11:14:01.067692   38216 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1109 11:14:01.067714   38216 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-111353 chown docker:docker /home/docker/.ssh/authorized_keys]
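
Editorial note: with the generated public key copied into /home/docker/.ssh/authorized_keys and chown'd above, the node is reachable over the container's published SSH port. A minimal sketch of a manual connection, assuming the host port 65419 that appears in the SSH client lines below:

    ssh -i /Users/jenkins/minikube-integration/15331-22028/.minikube/machines/default-k8s-diff-port-111353/id_rsa \
        -p 65419 docker@127.0.0.1 hostname
    # expected: default-k8s-diff-port-111353
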
	I1109 11:14:01.164378   38216 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-111353 --format={{.State.Status}}
	I1109 11:14:01.220536   38216 machine.go:88] provisioning docker machine ...
	I1109 11:14:01.220581   38216 ubuntu.go:169] provisioning hostname "default-k8s-diff-port-111353"
	I1109 11:14:01.220688   38216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-111353
	I1109 11:14:01.276706   38216 main.go:134] libmachine: Using SSH client type: native
	I1109 11:14:01.276897   38216 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6d00] 0x13e9e80 <nil>  [] 0s} 127.0.0.1 65419 <nil> <nil>}
	I1109 11:14:01.276912   38216 main.go:134] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-111353 && echo "default-k8s-diff-port-111353" | sudo tee /etc/hostname
	I1109 11:14:01.402323   38216 main.go:134] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-111353
	
	I1109 11:14:01.402431   38216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-111353
	I1109 11:14:01.461112   38216 main.go:134] libmachine: Using SSH client type: native
	I1109 11:14:01.461296   38216 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6d00] 0x13e9e80 <nil>  [] 0s} 127.0.0.1 65419 <nil> <nil>}
	I1109 11:14:01.461312   38216 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-111353' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-111353/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-111353' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 11:14:01.579366   38216 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I1109 11:14:01.579384   38216 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15331-22028/.minikube CaCertPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15331-22028/.minikube}
	I1109 11:14:01.579404   38216 ubuntu.go:177] setting up certificates
	I1109 11:14:01.579423   38216 provision.go:83] configureAuth start
	I1109 11:14:01.579513   38216 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-111353
	I1109 11:14:01.637504   38216 provision.go:138] copyHostCerts
	I1109 11:14:01.637616   38216 exec_runner.go:144] found /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.pem, removing ...
	I1109 11:14:01.637625   38216 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.pem
	I1109 11:14:01.637732   38216 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.pem (1082 bytes)
	I1109 11:14:01.637933   38216 exec_runner.go:144] found /Users/jenkins/minikube-integration/15331-22028/.minikube/cert.pem, removing ...
	I1109 11:14:01.637939   38216 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15331-22028/.minikube/cert.pem
	I1109 11:14:01.638010   38216 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15331-22028/.minikube/cert.pem (1123 bytes)
	I1109 11:14:01.638172   38216 exec_runner.go:144] found /Users/jenkins/minikube-integration/15331-22028/.minikube/key.pem, removing ...
	I1109 11:14:01.638178   38216 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15331-22028/.minikube/key.pem
	I1109 11:14:01.638243   38216 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15331-22028/.minikube/key.pem (1675 bytes)
	I1109 11:14:01.638369   38216 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-111353 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube default-k8s-diff-port-111353]
	I1109 11:14:01.916308   38216 provision.go:172] copyRemoteCerts
	I1109 11:14:01.916382   38216 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 11:14:01.916444   38216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-111353
	I1109 11:14:01.974044   38216 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65419 SSHKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/default-k8s-diff-port-111353/id_rsa Username:docker}
	I1109 11:14:02.061570   38216 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1109 11:14:02.079302   38216 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I1109 11:14:02.096214   38216 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1109 11:14:02.113380   38216 provision.go:86] duration metric: configureAuth took 533.947899ms
	I1109 11:14:02.113394   38216 ubuntu.go:193] setting minikube options for container-runtime
	I1109 11:14:02.113549   38216 config.go:180] Loaded profile config "default-k8s-diff-port-111353": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1109 11:14:02.113652   38216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-111353
	I1109 11:14:02.171276   38216 main.go:134] libmachine: Using SSH client type: native
	I1109 11:14:02.171423   38216 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6d00] 0x13e9e80 <nil>  [] 0s} 127.0.0.1 65419 <nil> <nil>}
	I1109 11:14:02.171441   38216 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1109 11:14:02.289713   38216 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1109 11:14:02.289724   38216 ubuntu.go:71] root file system type: overlay
	I1109 11:14:02.289841   38216 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1109 11:14:02.289957   38216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-111353
	I1109 11:14:02.347183   38216 main.go:134] libmachine: Using SSH client type: native
	I1109 11:14:02.347891   38216 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6d00] 0x13e9e80 <nil>  [] 0s} 127.0.0.1 65419 <nil> <nil>}
	I1109 11:14:02.348045   38216 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1109 11:14:02.478260   38216 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1109 11:14:02.478369   38216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-111353
	I1109 11:14:02.535614   38216 main.go:134] libmachine: Using SSH client type: native
	I1109 11:14:02.535775   38216 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6d00] 0x13e9e80 <nil>  [] 0s} 127.0.0.1 65419 <nil> <nil>}
	I1109 11:14:02.535788   38216 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1109 11:14:03.097723   38216 main.go:134] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2022-10-18 18:18:12.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2022-11-09 19:14:02.488500088 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
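Editorial note: the diff above is the standard systemd override pattern. The empty ExecStart= first clears the command inherited from the stock unit, and the following ExecStart= installs the TLS-enabled dockerd command; without the clearing line, systemd rejects the unit because more than one ExecStart= is only allowed for Type=oneshot services, exactly as the unit's own comments state. The same pattern in the smallest possible drop-in, an illustrative sketch rather than anything taken from this run:

    # /etc/systemd/system/docker.service.d/override.conf
    [Service]
    ExecStart=
    ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
    # then: sudo systemctl daemon-reload && sudo systemctl restart docker
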
	I1109 11:14:03.097745   38216 machine.go:91] provisioned docker machine in 1.877206645s
	I1109 11:14:03.097751   38216 client.go:171] LocalClient.Create took 8.558302345s
	I1109 11:14:03.097767   38216 start.go:167] duration metric: libmachine.API.Create for "default-k8s-diff-port-111353" took 8.558379554s
	I1109 11:14:03.097778   38216 start.go:300] post-start starting for "default-k8s-diff-port-111353" (driver="docker")
	I1109 11:14:03.097791   38216 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 11:14:03.097882   38216 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 11:14:03.097951   38216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-111353
	I1109 11:14:03.155435   38216 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65419 SSHKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/default-k8s-diff-port-111353/id_rsa Username:docker}
	I1109 11:14:03.246241   38216 ssh_runner.go:195] Run: cat /etc/os-release
	I1109 11:14:03.249807   38216 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1109 11:14:03.249823   38216 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1109 11:14:03.249831   38216 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1109 11:14:03.249836   38216 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I1109 11:14:03.249847   38216 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15331-22028/.minikube/addons for local assets ...
	I1109 11:14:03.249948   38216 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15331-22028/.minikube/files for local assets ...
	I1109 11:14:03.250133   38216 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15331-22028/.minikube/files/etc/ssl/certs/228682.pem -> 228682.pem in /etc/ssl/certs
	I1109 11:14:03.250338   38216 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1109 11:14:03.257420   38216 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/files/etc/ssl/certs/228682.pem --> /etc/ssl/certs/228682.pem (1708 bytes)
	I1109 11:14:03.274010   38216 start.go:303] post-start completed in 176.224035ms
	I1109 11:14:03.274552   38216 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-111353
	I1109 11:14:03.331103   38216 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/default-k8s-diff-port-111353/config.json ...
	I1109 11:14:03.331576   38216 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 11:14:03.331646   38216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-111353
	I1109 11:14:03.388130   38216 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65419 SSHKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/default-k8s-diff-port-111353/id_rsa Username:docker}
	I1109 11:14:03.471680   38216 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1109 11:14:03.476399   38216 start.go:128] duration metric: createHost completed in 8.982450867s
	I1109 11:14:03.476414   38216 start.go:83] releasing machines lock for "default-k8s-diff-port-111353", held for 8.982552285s
	I1109 11:14:03.476499   38216 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-111353
	I1109 11:14:03.533176   38216 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1109 11:14:03.533183   38216 ssh_runner.go:195] Run: systemctl --version
	I1109 11:14:03.533267   38216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-111353
	I1109 11:14:03.533265   38216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-111353
	I1109 11:14:03.596874   38216 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65419 SSHKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/default-k8s-diff-port-111353/id_rsa Username:docker}
	I1109 11:14:03.597072   38216 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:65419 SSHKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/default-k8s-diff-port-111353/id_rsa Username:docker}
	I1109 11:14:03.736574   38216 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1109 11:14:03.746973   38216 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I1109 11:14:03.747044   38216 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1109 11:14:03.756272   38216 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 11:14:03.769113   38216 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1109 11:14:03.830574   38216 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1109 11:14:03.895373   38216 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 11:14:03.957398   38216 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1109 11:14:04.161433   38216 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1109 11:14:04.233222   38216 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 11:14:04.298100   38216 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I1109 11:14:04.309210   38216 start.go:451] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1109 11:14:04.309299   38216 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1109 11:14:04.313213   38216 start.go:472] Will wait 60s for crictl version
	I1109 11:14:04.313271   38216 ssh_runner.go:195] Run: sudo crictl version
	I1109 11:14:04.413065   38216 start.go:481] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.20
	RuntimeApiVersion:  1.41.0
	I1109 11:14:04.413159   38216 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1109 11:14:04.440282   38216 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1109 11:14:04.492334   38216 out.go:204] * Preparing Kubernetes v1.25.3 on Docker 20.10.20 ...
	I1109 11:14:04.492591   38216 cli_runner.go:164] Run: docker exec -t default-k8s-diff-port-111353 dig +short host.docker.internal
	I1109 11:14:04.605427   38216 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I1109 11:14:04.605532   38216 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I1109 11:14:04.609915   38216 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 11:14:04.621623   38216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}'" default-k8s-diff-port-111353
	I1109 11:14:04.680388   38216 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1109 11:14:04.680481   38216 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1109 11:14:04.705770   38216 docker.go:613] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1109 11:14:04.705792   38216 docker.go:543] Images already preloaded, skipping extraction
	I1109 11:14:04.705901   38216 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1109 11:14:04.730772   38216 docker.go:613] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1109 11:14:04.730798   38216 cache_images.go:84] Images are preloaded, skipping loading
	I1109 11:14:04.730891   38216 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1109 11:14:04.804184   38216 cni.go:95] Creating CNI manager for ""
	I1109 11:14:04.804202   38216 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1109 11:14:04.804220   38216 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1109 11:14:04.804236   38216 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-111353 NodeName:default-k8s-diff-port-111353 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
	I1109 11:14:04.804368   38216 kubeadm.go:161] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "default-k8s-diff-port-111353"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
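Editorial note: the generated kubeadm.yaml above chains InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration in one file, separated by `---`. kubeadm can exercise such a file without mutating the node; a sketch, assuming the path minikube copies it to a few lines below:

    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
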
	I1109 11:14:04.804466   38216 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --hostname-override=default-k8s-diff-port-111353 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.3 ClusterName:default-k8s-diff-port-111353 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
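
Editorial note: the kubelet is wired the same way as dockerd above: the drop-in scp'd a few lines below to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf clears ExecStart= and re-points the kubelet at the cri-dockerd socket. To see the merged unit on the node, a sketch reusing the systemctl idiom this log already applies to docker.service:

    sudo systemctl cat kubelet
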
	I1109 11:14:04.804543   38216 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
	I1109 11:14:04.813674   38216 binaries.go:44] Found k8s binaries, skipping transfer
	I1109 11:14:04.813758   38216 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1109 11:14:04.821426   38216 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (490 bytes)
	I1109 11:14:04.835269   38216 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1109 11:14:04.849639   38216 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2050 bytes)
	I1109 11:14:04.864310   38216 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1109 11:14:04.869523   38216 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 11:14:04.880738   38216 certs.go:54] Setting up /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/default-k8s-diff-port-111353 for IP: 192.168.76.2
	I1109 11:14:04.880882   38216 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.key
	I1109 11:14:04.880971   38216 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15331-22028/.minikube/proxy-client-ca.key
	I1109 11:14:04.881030   38216 certs.go:302] generating minikube-user signed cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/default-k8s-diff-port-111353/client.key
	I1109 11:14:04.881048   38216 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/default-k8s-diff-port-111353/client.crt with IP's: []
	I1109 11:14:05.029628   38216 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/default-k8s-diff-port-111353/client.crt ...
	I1109 11:14:05.029645   38216 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/default-k8s-diff-port-111353/client.crt: {Name:mkcac14661b5eefa8f178be164cc0aa371fc7d51 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 11:14:05.029994   38216 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/default-k8s-diff-port-111353/client.key ...
	I1109 11:14:05.030002   38216 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/default-k8s-diff-port-111353/client.key: {Name:mkc71e248d64cf72e7a14a9848865f2ead004bc5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 11:14:05.030224   38216 certs.go:302] generating minikube signed cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/default-k8s-diff-port-111353/apiserver.key.31bdca25
	I1109 11:14:05.030244   38216 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/default-k8s-diff-port-111353/apiserver.crt.31bdca25 with IP's: [192.168.76.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1109 11:14:05.302588   38216 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/default-k8s-diff-port-111353/apiserver.crt.31bdca25 ...
	I1109 11:14:05.302606   38216 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/default-k8s-diff-port-111353/apiserver.crt.31bdca25: {Name:mk6ca735f8f9af06562caf590fecc557f0f530d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 11:14:05.302899   38216 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/default-k8s-diff-port-111353/apiserver.key.31bdca25 ...
	I1109 11:14:05.302907   38216 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/default-k8s-diff-port-111353/apiserver.key.31bdca25: {Name:mk45a5495a1e18cd0b5f42ac1e97393b42693115 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 11:14:05.303110   38216 certs.go:320] copying /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/default-k8s-diff-port-111353/apiserver.crt.31bdca25 -> /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/default-k8s-diff-port-111353/apiserver.crt
	I1109 11:14:05.303287   38216 certs.go:324] copying /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/default-k8s-diff-port-111353/apiserver.key.31bdca25 -> /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/default-k8s-diff-port-111353/apiserver.key
	I1109 11:14:05.303488   38216 certs.go:302] generating aggregator signed cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/default-k8s-diff-port-111353/proxy-client.key
	I1109 11:14:05.303506   38216 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/default-k8s-diff-port-111353/proxy-client.crt with IP's: []
	I1109 11:14:05.378197   38216 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/default-k8s-diff-port-111353/proxy-client.crt ...
	I1109 11:14:05.378207   38216 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/default-k8s-diff-port-111353/proxy-client.crt: {Name:mkad3c7f9fca904cf2976aef65c3d4422e96bad8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 11:14:05.378423   38216 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/default-k8s-diff-port-111353/proxy-client.key ...
	I1109 11:14:05.378435   38216 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/default-k8s-diff-port-111353/proxy-client.key: {Name:mkd47ac0a72eccdda5b1dbb2a0b0a01e0b55d141 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 11:14:05.378846   38216 certs.go:388] found cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/22868.pem (1338 bytes)
	W1109 11:14:05.378895   38216 certs.go:384] ignoring /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/22868_empty.pem, impossibly tiny 0 bytes
	I1109 11:14:05.378908   38216 certs.go:388] found cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca-key.pem (1675 bytes)
	I1109 11:14:05.378943   38216 certs.go:388] found cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca.pem (1082 bytes)
	I1109 11:14:05.378978   38216 certs.go:388] found cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/cert.pem (1123 bytes)
	I1109 11:14:05.379013   38216 certs.go:388] found cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/key.pem (1675 bytes)
	I1109 11:14:05.379086   38216 certs.go:388] found cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15331-22028/.minikube/files/etc/ssl/certs/228682.pem (1708 bytes)
	I1109 11:14:05.379602   38216 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/default-k8s-diff-port-111353/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1109 11:14:05.397925   38216 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/default-k8s-diff-port-111353/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1109 11:14:05.415241   38216 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/default-k8s-diff-port-111353/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1109 11:14:05.431982   38216 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/default-k8s-diff-port-111353/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1109 11:14:05.448548   38216 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 11:14:05.465245   38216 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1109 11:14:05.482143   38216 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 11:14:05.499046   38216 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1109 11:14:05.515740   38216 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 11:14:05.532843   38216 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/22868.pem --> /usr/share/ca-certificates/22868.pem (1338 bytes)
	I1109 11:14:05.549662   38216 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/files/etc/ssl/certs/228682.pem --> /usr/share/ca-certificates/228682.pem (1708 bytes)
	I1109 11:14:05.566487   38216 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1109 11:14:05.580434   38216 ssh_runner.go:195] Run: openssl version
	I1109 11:14:05.586019   38216 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/228682.pem && ln -fs /usr/share/ca-certificates/228682.pem /etc/ssl/certs/228682.pem"
	I1109 11:14:05.594180   38216 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/228682.pem
	I1109 11:14:05.598402   38216 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Nov  9 18:08 /usr/share/ca-certificates/228682.pem
	I1109 11:14:05.598472   38216 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/228682.pem
	I1109 11:14:05.605423   38216 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/228682.pem /etc/ssl/certs/3ec20f2e.0"
	I1109 11:14:05.614026   38216 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 11:14:05.622520   38216 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 11:14:05.626959   38216 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Nov  9 18:04 /usr/share/ca-certificates/minikubeCA.pem
	I1109 11:14:05.627036   38216 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 11:14:05.632569   38216 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1109 11:14:05.640657   38216 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22868.pem && ln -fs /usr/share/ca-certificates/22868.pem /etc/ssl/certs/22868.pem"
	I1109 11:14:05.648852   38216 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22868.pem
	I1109 11:14:05.653068   38216 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Nov  9 18:08 /usr/share/ca-certificates/22868.pem
	I1109 11:14:05.653140   38216 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22868.pem
	I1109 11:14:05.658768   38216 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22868.pem /etc/ssl/certs/51391683.0"
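
Editorial note: the ls/hash/ln sequence above implements OpenSSL's subject-hash lookup: tools that trust /etc/ssl/certs find a CA through a symlink named <hash>.0, where the hash is derived from the certificate's subject. Reproducing one link by hand, a minimal sketch for the minikubeCA file from this run:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/$h.0"
    # this log shows h=b5213941 for that CA
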
	I1109 11:14:05.666757   38216 kubeadm.go:396] StartCluster: {Name:default-k8s-diff-port-111353 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:default-k8s-diff-port-111353 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1109 11:14:05.666886   38216 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1109 11:14:05.689461   38216 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1109 11:14:05.696916   38216 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1109 11:14:05.703994   38216 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
	I1109 11:14:05.704047   38216 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1109 11:14:05.711331   38216 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1109 11:14:05.711356   38216 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1109 11:14:05.756659   38216 kubeadm.go:317] [init] Using Kubernetes version: v1.25.3
	I1109 11:14:05.756700   38216 kubeadm.go:317] [preflight] Running pre-flight checks
	I1109 11:14:05.854195   38216 kubeadm.go:317] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1109 11:14:05.854268   38216 kubeadm.go:317] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1109 11:14:05.854355   38216 kubeadm.go:317] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1109 11:14:05.973469   38216 kubeadm.go:317] [certs] Using certificateDir folder "/var/lib/minikube/certs"
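
Editorial note: from here the log interleaves two concurrent test processes. PID 38216 is the default-k8s-diff-port-111353 run above (Kubernetes v1.25.3); PID 37204 belongs to a parallel profile pinned to v1.16.0, as the binaries paths below show. To read one stream at a time, filter by PID, a trivial sketch assuming the report was saved as report.txt:

    grep ' 37204 ' report.txt   # only the failing v1.16.0 run
    grep ' 38216 ' report.txt   # only the v1.25.3 run
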
	I1109 11:14:04.617289   37204 kubeadm.go:317] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1109 11:14:04.617457   37204 kubeadm.go:317] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	I1109 11:14:04.617480   37204 kubeadm.go:317] 
	I1109 11:14:04.617509   37204 kubeadm.go:317] Unfortunately, an error has occurred:
	I1109 11:14:04.617550   37204 kubeadm.go:317] 	timed out waiting for the condition
	I1109 11:14:04.617558   37204 kubeadm.go:317] 
	I1109 11:14:04.617619   37204 kubeadm.go:317] This error is likely caused by:
	I1109 11:14:04.617649   37204 kubeadm.go:317] 	- The kubelet is not running
	I1109 11:14:04.617733   37204 kubeadm.go:317] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1109 11:14:04.617744   37204 kubeadm.go:317] 
	I1109 11:14:04.617819   37204 kubeadm.go:317] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1109 11:14:04.617846   37204 kubeadm.go:317] 	- 'systemctl status kubelet'
	I1109 11:14:04.617868   37204 kubeadm.go:317] 	- 'journalctl -xeu kubelet'
	I1109 11:14:04.617874   37204 kubeadm.go:317] 
	I1109 11:14:04.617957   37204 kubeadm.go:317] Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1109 11:14:04.618035   37204 kubeadm.go:317] To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	I1109 11:14:04.618102   37204 kubeadm.go:317] Here is one example how you may list all Kubernetes containers running in docker:
	I1109 11:14:04.618139   37204 kubeadm.go:317] 	- 'docker ps -a | grep kube | grep -v pause'
	I1109 11:14:04.618191   37204 kubeadm.go:317] 	Once you have found the failing container, you can inspect its logs with:
	I1109 11:14:04.618215   37204 kubeadm.go:317] 	- 'docker logs CONTAINERID'
	I1109 11:14:04.620908   37204 kubeadm.go:317] 	[WARNING Swap]: running with swap on is not supported. Please disable swap
	I1109 11:14:04.621018   37204 kubeadm.go:317] 	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 18.09
	I1109 11:14:04.621107   37204 kubeadm.go:317] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1109 11:14:04.621192   37204 kubeadm.go:317] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1109 11:14:04.621253   37204 kubeadm.go:317] To see the stack trace of this error execute with --v=5 or higher
	I1109 11:14:04.621273   37204 kubeadm.go:398] StartCluster complete in 7m58.02267343s
	I1109 11:14:04.621369   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1109 11:14:04.645035   37204 logs.go:274] 0 containers: []
	W1109 11:14:04.645047   37204 logs.go:276] No container was found matching "kube-apiserver"
	I1109 11:14:04.645132   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1109 11:14:04.667780   37204 logs.go:274] 0 containers: []
	W1109 11:14:04.667793   37204 logs.go:276] No container was found matching "etcd"
	I1109 11:14:04.667880   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1109 11:14:04.692026   37204 logs.go:274] 0 containers: []
	W1109 11:14:04.692041   37204 logs.go:276] No container was found matching "coredns"
	I1109 11:14:04.692132   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1109 11:14:04.714003   37204 logs.go:274] 0 containers: []
	W1109 11:14:04.714016   37204 logs.go:276] No container was found matching "kube-scheduler"
	I1109 11:14:04.714101   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1109 11:14:04.739118   37204 logs.go:274] 0 containers: []
	W1109 11:14:04.739132   37204 logs.go:276] No container was found matching "kube-proxy"
	I1109 11:14:04.739217   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1109 11:14:04.762684   37204 logs.go:274] 0 containers: []
	W1109 11:14:04.762695   37204 logs.go:276] No container was found matching "kubernetes-dashboard"
	I1109 11:14:04.762781   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1109 11:14:04.786482   37204 logs.go:274] 0 containers: []
	W1109 11:14:04.786493   37204 logs.go:276] No container was found matching "storage-provisioner"
	I1109 11:14:04.786587   37204 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1109 11:14:04.810343   37204 logs.go:274] 0 containers: []
	W1109 11:14:04.810354   37204 logs.go:276] No container was found matching "kube-controller-manager"
	I1109 11:14:04.810360   37204 logs.go:123] Gathering logs for describe nodes ...
	I1109 11:14:04.810367   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1109 11:14:04.875222   37204 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1109 11:14:04.875232   37204 logs.go:123] Gathering logs for Docker ...
	I1109 11:14:04.875239   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -n 400"
	I1109 11:14:04.890599   37204 logs.go:123] Gathering logs for container status ...
	I1109 11:14:04.890615   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1109 11:14:06.939039   37204 ssh_runner.go:235] Completed: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": (2.048428933s)
	I1109 11:14:06.939196   37204 logs.go:123] Gathering logs for kubelet ...
	I1109 11:14:06.939204   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1109 11:14:06.979307   37204 logs.go:123] Gathering logs for dmesg ...
	I1109 11:14:06.979320   37204 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1109 11:14:06.991014   37204 out.go:369] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.16.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Activating the kubelet service
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.
	
	Unfortunately, an error has occurred:
		timed out waiting for the condition
	
	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'
	
	stderr:
		[WARNING Swap]: running with swap on is not supported. Please disable swap
		[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.20. Latest validated version: 18.09
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1109 11:14:06.991038   37204 out.go:239] * 
	W1109 11:14:06.991161   37204 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W1109 11:14:06.991175   37204 out.go:239] * 
	W1109 11:14:06.991800   37204 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1109 11:14:07.061894   37204 out.go:177] 
	W1109 11:14:07.103353   37204 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W1109 11:14:07.103414   37204 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1109 11:14:07.103447   37204 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1109 11:14:07.145336   37204 out.go:177] 
	
	* 
	* ==> Docker <==
	* -- Logs begin at Wed 2022-11-09 19:06:03 UTC, end at Wed 2022-11-09 19:14:08 UTC. --
	Nov 09 19:06:05 old-k8s-version-110019 systemd[1]: Stopping Docker Application Container Engine...
	Nov 09 19:06:05 old-k8s-version-110019 dockerd[132]: time="2022-11-09T19:06:05.637484207Z" level=info msg="Processing signal 'terminated'"
	Nov 09 19:06:05 old-k8s-version-110019 dockerd[132]: time="2022-11-09T19:06:05.638447358Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Nov 09 19:06:05 old-k8s-version-110019 dockerd[132]: time="2022-11-09T19:06:05.638987991Z" level=info msg="Daemon shutdown complete"
	Nov 09 19:06:05 old-k8s-version-110019 systemd[1]: docker.service: Succeeded.
	Nov 09 19:06:05 old-k8s-version-110019 systemd[1]: Stopped Docker Application Container Engine.
	Nov 09 19:06:05 old-k8s-version-110019 systemd[1]: Starting Docker Application Container Engine...
	Nov 09 19:06:05 old-k8s-version-110019 dockerd[426]: time="2022-11-09T19:06:05.689235014Z" level=info msg="Starting up"
	Nov 09 19:06:05 old-k8s-version-110019 dockerd[426]: time="2022-11-09T19:06:05.690811504Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Nov 09 19:06:05 old-k8s-version-110019 dockerd[426]: time="2022-11-09T19:06:05.690844834Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Nov 09 19:06:05 old-k8s-version-110019 dockerd[426]: time="2022-11-09T19:06:05.690861011Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Nov 09 19:06:05 old-k8s-version-110019 dockerd[426]: time="2022-11-09T19:06:05.690872322Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Nov 09 19:06:05 old-k8s-version-110019 dockerd[426]: time="2022-11-09T19:06:05.691897097Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Nov 09 19:06:05 old-k8s-version-110019 dockerd[426]: time="2022-11-09T19:06:05.691925520Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Nov 09 19:06:05 old-k8s-version-110019 dockerd[426]: time="2022-11-09T19:06:05.691937393Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Nov 09 19:06:05 old-k8s-version-110019 dockerd[426]: time="2022-11-09T19:06:05.691943086Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Nov 09 19:06:05 old-k8s-version-110019 dockerd[426]: time="2022-11-09T19:06:05.695241631Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Nov 09 19:06:05 old-k8s-version-110019 dockerd[426]: time="2022-11-09T19:06:05.699296631Z" level=info msg="Loading containers: start."
	Nov 09 19:06:05 old-k8s-version-110019 dockerd[426]: time="2022-11-09T19:06:05.776641771Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Nov 09 19:06:05 old-k8s-version-110019 dockerd[426]: time="2022-11-09T19:06:05.808607293Z" level=info msg="Loading containers: done."
	Nov 09 19:06:05 old-k8s-version-110019 dockerd[426]: time="2022-11-09T19:06:05.816483948Z" level=info msg="Docker daemon" commit=03df974 graphdriver(s)=overlay2 version=20.10.20
	Nov 09 19:06:05 old-k8s-version-110019 dockerd[426]: time="2022-11-09T19:06:05.816574036Z" level=info msg="Daemon has completed initialization"
	Nov 09 19:06:05 old-k8s-version-110019 systemd[1]: Started Docker Application Container Engine.
	Nov 09 19:06:05 old-k8s-version-110019 dockerd[426]: time="2022-11-09T19:06:05.837400763Z" level=info msg="API listen on [::]:2376"
	Nov 09 19:06:05 old-k8s-version-110019 dockerd[426]: time="2022-11-09T19:06:05.842877088Z" level=info msg="API listen on /var/run/docker.sock"
	
	* 
	* ==> container status <==
	* CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	time="2022-11-09T19:14:10Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> kernel <==
	*  19:14:11 up  4:13,  0 users,  load average: 0.57, 0.75, 1.10
	Linux old-k8s-version-110019 5.15.49-linuxkit #1 SMP Tue Sep 13 07:51:46 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2022-11-09 19:06:03 UTC, end at Wed 2022-11-09 19:14:11 UTC. --
	Nov 09 19:14:09 old-k8s-version-110019 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Nov 09 19:14:10 old-k8s-version-110019 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 161.
	Nov 09 19:14:10 old-k8s-version-110019 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Nov 09 19:14:10 old-k8s-version-110019 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Nov 09 19:14:10 old-k8s-version-110019 kubelet[14478]: I1109 19:14:10.161213   14478 server.go:410] Version: v1.16.0
	Nov 09 19:14:10 old-k8s-version-110019 kubelet[14478]: I1109 19:14:10.161496   14478 plugins.go:100] No cloud provider specified.
	Nov 09 19:14:10 old-k8s-version-110019 kubelet[14478]: I1109 19:14:10.161511   14478 server.go:773] Client rotation is on, will bootstrap in background
	Nov 09 19:14:10 old-k8s-version-110019 kubelet[14478]: I1109 19:14:10.163597   14478 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Nov 09 19:14:10 old-k8s-version-110019 kubelet[14478]: W1109 19:14:10.164395   14478 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Nov 09 19:14:10 old-k8s-version-110019 kubelet[14478]: W1109 19:14:10.164463   14478 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Nov 09 19:14:10 old-k8s-version-110019 kubelet[14478]: F1109 19:14:10.164490   14478 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Nov 09 19:14:10 old-k8s-version-110019 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Nov 09 19:14:10 old-k8s-version-110019 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Nov 09 19:14:10 old-k8s-version-110019 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 162.
	Nov 09 19:14:10 old-k8s-version-110019 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Nov 09 19:14:10 old-k8s-version-110019 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Nov 09 19:14:10 old-k8s-version-110019 kubelet[14500]: I1109 19:14:10.910487   14500 server.go:410] Version: v1.16.0
	Nov 09 19:14:10 old-k8s-version-110019 kubelet[14500]: I1109 19:14:10.910650   14500 plugins.go:100] No cloud provider specified.
	Nov 09 19:14:10 old-k8s-version-110019 kubelet[14500]: I1109 19:14:10.910660   14500 server.go:773] Client rotation is on, will bootstrap in background
	Nov 09 19:14:10 old-k8s-version-110019 kubelet[14500]: I1109 19:14:10.912272   14500 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Nov 09 19:14:10 old-k8s-version-110019 kubelet[14500]: W1109 19:14:10.912934   14500 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Nov 09 19:14:10 old-k8s-version-110019 kubelet[14500]: W1109 19:14:10.913002   14500 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Nov 09 19:14:10 old-k8s-version-110019 kubelet[14500]: F1109 19:14:10.913039   14500 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Nov 09 19:14:10 old-k8s-version-110019 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Nov 09 19:14:10 old-k8s-version-110019 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
	

-- /stdout --
** stderr ** 
	E1109 11:14:10.915884   38343 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

** /stderr **
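The post-mortem kubelet journal above pins down why `kubeadm init` timed out in wait-control-plane: the v1.16.0 kubelet crash-loops with "failed to run Kubelet: mountpoint for cpu not found", so its healthz endpoint on 10248 never comes up. A minimal shell sketch of the triage kubeadm itself suggests above, plus the retry minikube proposes (profile name taken from this run; these flags are illustrative, not a verified fix):

	# Commands kubeadm suggests above, run inside the node via minikube ssh
	out/minikube-darwin-amd64 -p old-k8s-version-110019 ssh -- systemctl status kubelet
	out/minikube-darwin-amd64 -p old-k8s-version-110019 ssh -- journalctl -xeu kubelet
	# Check whether a cpu cgroup hierarchy is mounted in the node container at all
	out/minikube-darwin-amd64 -p old-k8s-version-110019 ssh -- grep cgroup /proc/mounts
	# Retry with the cgroup driver suggested in the failure output above
	out/minikube-darwin-amd64 start -p old-k8s-version-110019 --extra-config=kubelet.cgroup-driver=systemd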
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-110019 -n old-k8s-version-110019
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-110019 -n old-k8s-version-110019: exit status 2 (393.688724ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-110019" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (489.95s)
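The harness checks a single status field via a Go template rather than parsing full output; a non-zero exit code from `minikube status` encodes component state, which is why the test treats exit status 2 plus "Stopped" as "may be ok" rather than a hard error. A hedged re-run of the same check (profile and node names from this run; the echo is illustrative):

	# Same single-field query helpers_test.go runs above
	out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-110019 -n old-k8s-version-110019
	# Prints "Stopped" and exits 2 in this state, matching the post-mortem above
	echo "status exit code: $?"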

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (574.78s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1109 11:14:14.261185   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/kubenet-104027/client.crt: no such file or directory
E1109 11:14:15.183062   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/no-preload-110035/client.crt: no such file or directory
E1109 11:14:20.343693   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/cilium-104028/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:65166/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:65166/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:65166/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:65166/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1109 11:14:57.883015   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/calico-104028/client.crt: no such file or directory

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:65166/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:65166/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:65166/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1109 11:15:28.031983   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/false-104027/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:65166/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1109 11:15:43.390777   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/cilium-104028/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:65166/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1109 11:15:56.560577   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/addons-100328/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:65166/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:65166/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1109 11:16:12.646988   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/auto-104027/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:65166/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1109 11:16:31.330089   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/no-preload-110035/client.crt: no such file or directory
E1109 11:16:33.787580   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/bridge-104027/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:65166/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:65166/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1109 11:16:45.308750   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/functional-100827/client.crt: no such file or directory
E1109 11:16:51.161209   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/false-104027/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:65166/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1109 11:16:59.021785   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/no-preload-110035/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:65166/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:65166/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1109 11:17:19.623538   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/addons-100328/client.crt: no such file or directory
E1109 11:17:22.530707   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/kindnet-104027/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:65166/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:65166/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:65166/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1109 11:17:56.837445   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/bridge-104027/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:65166/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1109 11:18:05.125722   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/skaffold-103914/client.crt: no such file or directory
E1109 11:18:08.784324   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/enable-default-cni-104027/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:65166/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:65166/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:65166/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:65166/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:65166/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:65166/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1109 11:19:14.258452   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/kubenet-104027/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:65166/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1109 11:19:20.340721   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/cilium-104028/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:65166/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1109 11:19:31.838718   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/enable-default-cni-104027/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:65166/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:65166/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1109 11:19:57.879993   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/calico-104028/client.crt: no such file or directory
E1109 11:20:02.073899   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/skaffold-103914/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:65166/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:65166/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:65166/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:65166/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1109 11:20:37.309873   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/kubenet-104027/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:65166/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1109 11:20:56.558631   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/addons-100328/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:65166/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:65166/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1109 11:21:12.642499   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/auto-104027/client.crt: no such file or directory

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:65166/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1109 11:21:20.934345   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/calico-104028/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:65166/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1109 11:21:31.326536   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/no-preload-110035/client.crt: no such file or directory
E1109 11:21:33.786710   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/bridge-104027/client.crt: no such file or directory

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:65166/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF

=== CONT  TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:65166/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:65166/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:65166/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1109 11:22:22.528025   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/kindnet-104027/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:65166/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:65166/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:65166/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:65166/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1109 11:23:08.369578   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/functional-100827/client.crt: no such file or directory
E1109 11:23:08.782498   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/enable-default-cni-104027/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:65166/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:65166/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:65166/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:65166/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: timed out waiting for the condition ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-110019 -n old-k8s-version-110019
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-110019 -n old-k8s-version-110019: exit status 2 (393.218318ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-110019" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: timed out waiting for the condition
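The 9m0s wait above polls the apiserver for pods matching the selector k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace, and every poll dies with EOF because nothing is serving on 127.0.0.1:65166. A hedged sketch of the equivalent manual check (assumes the kubectl context minikube writes for this profile):

	# Same namespace and label selector the test waits on above
	kubectl --context old-k8s-version-110019 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	# With the apiserver stopped, this fails the same way the EOF warnings above show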
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-110019
helpers_test.go:235: (dbg) docker inspect old-k8s-version-110019:

-- stdout --
	[
	    {
	        "Id": "179e62f50506851342bc4b05c87d0504aae01b76a29a4d4cfea86e9a42803961",
	        "Created": "2022-11-09T19:00:25.764137036Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 280606,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-11-09T19:06:03.12299926Z",
	            "FinishedAt": "2022-11-09T19:06:00.201300966Z"
	        },
	        "Image": "sha256:866c1fe4e3f2d2bfd7e546c12f77c7ef1d94d65a891923ff6772712a9f20df40",
	        "ResolvConfPath": "/var/lib/docker/containers/179e62f50506851342bc4b05c87d0504aae01b76a29a4d4cfea86e9a42803961/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/179e62f50506851342bc4b05c87d0504aae01b76a29a4d4cfea86e9a42803961/hostname",
	        "HostsPath": "/var/lib/docker/containers/179e62f50506851342bc4b05c87d0504aae01b76a29a4d4cfea86e9a42803961/hosts",
	        "LogPath": "/var/lib/docker/containers/179e62f50506851342bc4b05c87d0504aae01b76a29a4d4cfea86e9a42803961/179e62f50506851342bc4b05c87d0504aae01b76a29a4d4cfea86e9a42803961-json.log",
	        "Name": "/old-k8s-version-110019",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-110019:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-110019",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/6175df1d28c3e25a0e5c63a5d7570f40eb9f7f886fa3164897a59ab06ccc7cd7-init/diff:/var/lib/docker/overlay2/8c1487330bae95024fb04d0a8169f7cc81fd1ba3c27821870f7ac7c3f14eba21/diff:/var/lib/docker/overlay2/bcaf2c5b25be7a7acfb5b663242cc7456d579ea111b07e556bc197c7bfe8eceb/diff:/var/lib/docker/overlay2/0638d8210ce7d8ac0e4379a16e33ec4ba3dad0040bc7b1e6eee9a3ce3b1bab29/diff:/var/lib/docker/overlay2/82d04ede67e6bea7f3cfd2fd8cdf0af23333441d1a311f6c55109e45255a64ad/diff:/var/lib/docker/overlay2/00bbdacd39c41ffbc754eaba2d71640e0fb4097eb9097b8c2a5999bb5a8d4954/diff:/var/lib/docker/overlay2/dcea734b558e644021b8ede0f23c4e46a58e4c344becb334c465fd62b5d48e24/diff:/var/lib/docker/overlay2/ac3602d3dd4e947c3a4676ef8c632089eb73ee68aba964a7d95271ee18eb97f2/diff:/var/lib/docker/overlay2/ac2acc0194de08599857f1b8448ae7b4683ed77f947900bfd694cf26f6c54ffc/diff:/var/lib/docker/overlay2/fdbfaed38c23fa0bd5c54d311629017408fe01fee83151dd3f3d638a7617f4e4/diff:/var/lib/docker/overlay2/d025fd
583df9cfe294d4d46082700b7f5c621b93a796ba7f8f971ddaa60fd83a/diff:/var/lib/docker/overlay2/f4c2a2db4696fc9f1bd6e98e05d393517d2daaeb90f35ae457c61d742e4cc236/diff:/var/lib/docker/overlay2/5ca3c90c302636922d6701cd2547bba3ccd398ec5ade10e04dccd4fe6104a487/diff:/var/lib/docker/overlay2/a5a65589498adaf58375923e30a95f690962a85ecbf6af317b41821b327542b2/diff:/var/lib/docker/overlay2/ff71186ee131d2e64c9cb2be6b53d85bf84ea4a195c417de669d42fe5e10eecd/diff:/var/lib/docker/overlay2/493a221169b45236aaee4b88113fdb3c67c8fbb99e614b4a728d47a8448a3f3f/diff:/var/lib/docker/overlay2/4bafd70e2ae935045921b84746858ec62889e360ddf11495e2a15831b74efc0a/diff:/var/lib/docker/overlay2/90fd6faa0cf3969fb696847bf51d309918860f0cc4599a708e4932647f26c73e/diff:/var/lib/docker/overlay2/ea92881c6586b95c867a9734394d9d100f56f7cbe0812c11395e47b6035c4508/diff:/var/lib/docker/overlay2/ecab8d41ffba5fecbe6e01377fa7b74a9a81ceea0b6ce37ad2373c1bbf89f44a/diff:/var/lib/docker/overlay2/0a01bb2689fa7bca8ea3322bf7e0b9d33392f902c096d5e452da6755180c4a06/diff:/var/lib/d
ocker/overlay2/ab470b7aab8ddccf634d27d72ad09bcf355c2bd4439dcdf67f345220671e4238/diff:/var/lib/docker/overlay2/e7aae4cf5fe266e78947648cb680b6e10a1e6f6527df18d86605a770111ddaa5/diff:/var/lib/docker/overlay2/6dd4c667173ad3322ca465531a62d549cfe66fbb40165818a4e3923e37895eee/diff:/var/lib/docker/overlay2/6053a29c5dc20476b02a6b6d0dafc1d7a81702c6680392177192d709341eabd0/diff:/var/lib/docker/overlay2/80d8ec07feaf3a90ae374a6503523b083045c37de15abf3c2f12d0a21bea84c4/diff:/var/lib/docker/overlay2/55ad8679d9710c334bac8daf6e3b0f9a8fcafc62f44b8f2612bb054ff91aac64/diff:/var/lib/docker/overlay2/64743b589f654fa1e35b0e7be5ff94a3bebfa17c8f1c9811e0d42cdade3f57e7/diff:/var/lib/docker/overlay2/3722e4a69202d28b84adf462e6aa9f065e8079b1c00f372b68d56c9b2c44e658/diff:/var/lib/docker/overlay2/d1ceccb867521773a63007a600d64b8537e1cb227e2d9a6f9df5525e8315b3ef/diff:/var/lib/docker/overlay2/5de0b7762a7bcd971dba6ab8b5ec3a1163b2eb72c904b17e6b0b10dac2ed8cc6/diff:/var/lib/docker/overlay2/36f2255b89964a0e12e3175634bd5c1dfabf520e5a894e260323e26c3a3
83e8c/diff:/var/lib/docker/overlay2/58ca82e7923ce16120ce2bdcabd5d071ca9618a7139cac111d5d271fcb44d6b6/diff:/var/lib/docker/overlay2/c6b28d136c7e3834c9977a2115a7c798e71334d33a76997b156f96642e187677/diff:/var/lib/docker/overlay2/8a75a817735ea5c25b9b75502ba91bba33b5160dab28a17f2f44fa68bd8dcc3f/diff:/var/lib/docker/overlay2/4513fa1cc1e8023f3c0a924e36218c37dfe3595aec46e4d2d96d6c165774b8a3/diff:/var/lib/docker/overlay2/3d3be6ad927b487673f3c43210c9ea9a1acfa4d46cbcb724fce27baf9158b507/diff:/var/lib/docker/overlay2/b8e22ec10062469f680485d2f5f73afce0218c32b25e56188c00547a8152d0c7/diff:/var/lib/docker/overlay2/cb1cb5efbfa387d8fc791f28bdad103d39664ae58a6e372eddc5588db5779427/diff:/var/lib/docker/overlay2/c796de90ee7673fa4d316d056c320ee04f0b6ba574aaa33e4073e3a7200c11a6/diff:/var/lib/docker/overlay2/73c2de759693b5ffd934f7354e3db91ba89c6a5a9c24621fd7c27411bc335c5a/diff:/var/lib/docker/overlay2/46e9fe39b8edeecbe0b31037d24c2994ac3848fbb3af5ed3c47ca2fc1ad0d301/diff:/var/lib/docker/overlay2/febe0fa15a70685bf242a86e91427efdb9b7ec
302a48a7004f89cc569145c7a1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6175df1d28c3e25a0e5c63a5d7570f40eb9f7f886fa3164897a59ab06ccc7cd7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6175df1d28c3e25a0e5c63a5d7570f40eb9f7f886fa3164897a59ab06ccc7cd7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6175df1d28c3e25a0e5c63a5d7570f40eb9f7f886fa3164897a59ab06ccc7cd7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-110019",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-110019/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-110019",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-110019",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-110019",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "045b0c7f7b825a9537dd5e9af2fae17397ce41b99df276c464f08a1c8dd05584",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "65162"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "65163"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "65164"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "65165"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "65166"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/045b0c7f7b82",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-110019": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "179e62f50506",
	                        "old-k8s-version-110019"
	                    ],
	                    "NetworkID": "70a1b44058ab5d3fa2f8c48ca78ea76e689efbb2630885d7458319462051756b",
	                    "EndpointID": "306574ce40dd95945d1d7c9e7051a4cf459e90d59c42b7a6c013c989c23ad2d6",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
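The full docker inspect dump above can be narrowed to just the fields the harness keys on (container state, the published SSH port, the container's network address) using the same Go templates that appear in the minikube logs further down. A minimal sketch, assuming the profile container old-k8s-version-110019 still exists on the local daemon:

	docker container inspect old-k8s-version-110019 --format '{{.State.Status}}'
	# host port mapped to the container's SSH port (22/tcp)
	docker container inspect old-k8s-version-110019 --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'
	# container IP on the profile's user-defined network
	docker container inspect old-k8s-version-110019 --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}'
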
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-110019 -n old-k8s-version-110019
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-110019 -n old-k8s-version-110019: exit status 2 (392.217928ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
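minikube status folds component health into the exit code (roughly, bit flags for host, cluster, and Kubernetes state), so a host that prints Running can still exit non-zero when another component is down; the exit status 2 here is therefore informational rather than necessarily fatal, which is what the "may be ok" note above means. A minimal sketch of the same probe, reusing the command from this run:

	out/minikube-darwin-amd64 status --format='{{.Host}}' -p old-k8s-version-110019 -n old-k8s-version-110019
	echo "status exit code: $?"   # non-zero bits flag components that are not healthy
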
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-110019 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-110019 logs -n 25: (3.399216547s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                            Args                            |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| pause   | -p embed-certs-110722                                      | embed-certs-110722           | jenkins | v1.28.0 | 09 Nov 22 11:13 PST | 09 Nov 22 11:13 PST |
	|         | --alsologtostderr -v=1                                     |                              |         |         |                     |                     |
	| unpause | -p embed-certs-110722                                      | embed-certs-110722           | jenkins | v1.28.0 | 09 Nov 22 11:13 PST | 09 Nov 22 11:13 PST |
	|         | --alsologtostderr -v=1                                     |                              |         |         |                     |                     |
	| delete  | -p embed-certs-110722                                      | embed-certs-110722           | jenkins | v1.28.0 | 09 Nov 22 11:13 PST | 09 Nov 22 11:13 PST |
	| delete  | -p embed-certs-110722                                      | embed-certs-110722           | jenkins | v1.28.0 | 09 Nov 22 11:13 PST | 09 Nov 22 11:13 PST |
	| delete  | -p                                                         | disable-driver-mounts-111353 | jenkins | v1.28.0 | 09 Nov 22 11:13 PST | 09 Nov 22 11:13 PST |
	|         | disable-driver-mounts-111353                               |                              |         |         |                     |                     |
	| start   | -p                                                         | default-k8s-diff-port-111353 | jenkins | v1.28.0 | 09 Nov 22 11:13 PST | 09 Nov 22 11:14 PST |
	|         | default-k8s-diff-port-111353                               |                              |         |         |                     |                     |
	|         | --memory=2200                                              |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                              |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                      |                              |         |         |                     |                     |
	|         | --driver=docker                                            |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | default-k8s-diff-port-111353 | jenkins | v1.28.0 | 09 Nov 22 11:14 PST | 09 Nov 22 11:14 PST |
	|         | default-k8s-diff-port-111353                               |                              |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                              |         |         |                     |                     |
	| stop    | -p                                                         | default-k8s-diff-port-111353 | jenkins | v1.28.0 | 09 Nov 22 11:14 PST | 09 Nov 22 11:15 PST |
	|         | default-k8s-diff-port-111353                               |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-111353           | default-k8s-diff-port-111353 | jenkins | v1.28.0 | 09 Nov 22 11:15 PST | 09 Nov 22 11:15 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                              |         |         |                     |                     |
	| start   | -p                                                         | default-k8s-diff-port-111353 | jenkins | v1.28.0 | 09 Nov 22 11:15 PST | 09 Nov 22 11:20 PST |
	|         | default-k8s-diff-port-111353                               |                              |         |         |                     |                     |
	|         | --memory=2200                                              |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                              |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                      |                              |         |         |                     |                     |
	|         | --driver=docker                                            |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                               |                              |         |         |                     |                     |
	| ssh     | -p                                                         | default-k8s-diff-port-111353 | jenkins | v1.28.0 | 09 Nov 22 11:20 PST | 09 Nov 22 11:20 PST |
	|         | default-k8s-diff-port-111353                               |                              |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |                              |         |         |                     |                     |
	| pause   | -p                                                         | default-k8s-diff-port-111353 | jenkins | v1.28.0 | 09 Nov 22 11:20 PST | 09 Nov 22 11:20 PST |
	|         | default-k8s-diff-port-111353                               |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                              |         |         |                     |                     |
	| unpause | -p                                                         | default-k8s-diff-port-111353 | jenkins | v1.28.0 | 09 Nov 22 11:20 PST | 09 Nov 22 11:20 PST |
	|         | default-k8s-diff-port-111353                               |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                              |         |         |                     |                     |
	| delete  | -p                                                         | default-k8s-diff-port-111353 | jenkins | v1.28.0 | 09 Nov 22 11:20 PST | 09 Nov 22 11:20 PST |
	|         | default-k8s-diff-port-111353                               |                              |         |         |                     |                     |
	| delete  | -p                                                         | default-k8s-diff-port-111353 | jenkins | v1.28.0 | 09 Nov 22 11:20 PST | 09 Nov 22 11:20 PST |
	|         | default-k8s-diff-port-111353                               |                              |         |         |                     |                     |
	| start   | -p newest-cni-112024 --memory=2200 --alsologtostderr       | newest-cni-112024            | jenkins | v1.28.0 | 09 Nov 22 11:20 PST | 09 Nov 22 11:21 PST |
	|         | --wait=apiserver,system_pods,default_sa --feature-gates    |                              |         |         |                     |                     |
	|         | ServerSideApply=true --network-plugin=cni                  |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                              |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.25.3              |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-112024                 | newest-cni-112024            | jenkins | v1.28.0 | 09 Nov 22 11:21 PST | 09 Nov 22 11:21 PST |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                              |         |         |                     |                     |
	| stop    | -p newest-cni-112024                                       | newest-cni-112024            | jenkins | v1.28.0 | 09 Nov 22 11:21 PST | 09 Nov 22 11:21 PST |
	|         | --alsologtostderr -v=3                                     |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-112024                      | newest-cni-112024            | jenkins | v1.28.0 | 09 Nov 22 11:21 PST | 09 Nov 22 11:21 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                              |         |         |                     |                     |
	| start   | -p newest-cni-112024 --memory=2200 --alsologtostderr       | newest-cni-112024            | jenkins | v1.28.0 | 09 Nov 22 11:21 PST | 09 Nov 22 11:21 PST |
	|         | --wait=apiserver,system_pods,default_sa --feature-gates    |                              |         |         |                     |                     |
	|         | ServerSideApply=true --network-plugin=cni                  |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                              |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.25.3              |                              |         |         |                     |                     |
	| ssh     | -p newest-cni-112024 sudo                                  | newest-cni-112024            | jenkins | v1.28.0 | 09 Nov 22 11:21 PST | 09 Nov 22 11:21 PST |
	|         | crictl images -o json                                      |                              |         |         |                     |                     |
	| pause   | -p newest-cni-112024                                       | newest-cni-112024            | jenkins | v1.28.0 | 09 Nov 22 11:21 PST | 09 Nov 22 11:21 PST |
	|         | --alsologtostderr -v=1                                     |                              |         |         |                     |                     |
	| unpause | -p newest-cni-112024                                       | newest-cni-112024            | jenkins | v1.28.0 | 09 Nov 22 11:21 PST | 09 Nov 22 11:21 PST |
	|         | --alsologtostderr -v=1                                     |                              |         |         |                     |                     |
	| delete  | -p newest-cni-112024                                       | newest-cni-112024            | jenkins | v1.28.0 | 09 Nov 22 11:21 PST | 09 Nov 22 11:21 PST |
	| delete  | -p newest-cni-112024                                       | newest-cni-112024            | jenkins | v1.28.0 | 09 Nov 22 11:21 PST | 09 Nov 22 11:21 PST |
	|---------|------------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/11/09 11:21:19
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.19.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1109 11:21:19.250492   39160 out.go:296] Setting OutFile to fd 1 ...
	I1109 11:21:19.250701   39160 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1109 11:21:19.250707   39160 out.go:309] Setting ErrFile to fd 2...
	I1109 11:21:19.250710   39160 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1109 11:21:19.250816   39160 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15331-22028/.minikube/bin
	I1109 11:21:19.251308   39160 out.go:303] Setting JSON to false
	I1109 11:21:19.271409   39160 start.go:116] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":15654,"bootTime":1668006025,"procs":384,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.0","kernelVersion":"22.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W1109 11:21:19.271498   39160 start.go:124] gopshost.Virtualization returned error: not implemented yet
	I1109 11:21:19.291644   39160 out.go:177] * [newest-cni-112024] minikube v1.28.0 on Darwin 13.0
	I1109 11:21:19.333544   39160 notify.go:220] Checking for updates...
	I1109 11:21:19.333563   39160 out.go:177]   - MINIKUBE_LOCATION=15331
	I1109 11:21:19.354374   39160 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15331-22028/kubeconfig
	I1109 11:21:19.375235   39160 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1109 11:21:19.396495   39160 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 11:21:19.417472   39160 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15331-22028/.minikube
	I1109 11:21:19.439217   39160 config.go:180] Loaded profile config "newest-cni-112024": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1109 11:21:19.439890   39160 driver.go:365] Setting default libvirt URI to qemu:///system
	I1109 11:21:19.502058   39160 docker.go:137] docker version: linux-20.10.20
	I1109 11:21:19.502215   39160 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 11:21:19.643318   39160 info.go:266] docker info: {ID:DDH7:AERQ:YR2O:D5QA:GJP7:5WGB:4FTA:645H:2EO7:4YBT:LF5P:55H2 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:false NGoroutines:53 SystemTime:2022-11-09 19:21:19.554950168 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231719936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.1] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1109 11:21:19.665289   39160 out.go:177] * Using the docker driver based on existing profile
	I1109 11:21:19.686114   39160 start.go:282] selected driver: docker
	I1109 11:21:19.686140   39160 start.go:808] validating driver "docker" against &{Name:newest-cni-112024 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:newest-cni-112024 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1109 11:21:19.686284   39160 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 11:21:19.690131   39160 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 11:21:19.829207   39160 info.go:266] docker info: {ID:DDH7:AERQ:YR2O:D5QA:GJP7:5WGB:4FTA:645H:2EO7:4YBT:LF5P:55H2 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:false NGoroutines:53 SystemTime:2022-11-09 19:21:19.743036471 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231719936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.1] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1109 11:21:19.829378   39160 start_flags.go:920] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1109 11:21:19.829399   39160 cni.go:95] Creating CNI manager for ""
	I1109 11:21:19.829410   39160 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1109 11:21:19.829420   39160 start_flags.go:317] config:
	{Name:newest-cni-112024 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:newest-cni-112024 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1109 11:21:19.872114   39160 out.go:177] * Starting control plane node newest-cni-112024 in cluster newest-cni-112024
	I1109 11:21:19.894971   39160 cache.go:120] Beginning downloading kic base image for docker with docker
	I1109 11:21:19.916980   39160 out.go:177] * Pulling base image ...
	I1109 11:21:19.958921   39160 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1109 11:21:19.958936   39160 image.go:76] Checking for gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon
	I1109 11:21:19.959032   39160 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15331-22028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	I1109 11:21:19.959058   39160 cache.go:57] Caching tarball of preloaded images
	I1109 11:21:19.959303   39160 preload.go:174] Found /Users/jenkins/minikube-integration/15331-22028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1109 11:21:19.959321   39160 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on docker
	I1109 11:21:19.960399   39160 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/newest-cni-112024/config.json ...
	I1109 11:21:20.015567   39160 image.go:80] Found gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon, skipping pull
	I1109 11:21:20.015584   39160 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 exists in daemon, skipping load
	I1109 11:21:20.015594   39160 cache.go:208] Successfully downloaded all kic artifacts
	I1109 11:21:20.015664   39160 start.go:364] acquiring machines lock for newest-cni-112024: {Name:mkb3d9b076019ff717d4a8d41bcef73f8245d61e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 11:21:20.015762   39160 start.go:368] acquired machines lock for "newest-cni-112024" in 75.201µs
	I1109 11:21:20.015789   39160 start.go:96] Skipping create...Using existing machine configuration
	I1109 11:21:20.015800   39160 fix.go:55] fixHost starting: 
	I1109 11:21:20.016072   39160 cli_runner.go:164] Run: docker container inspect newest-cni-112024 --format={{.State.Status}}
	I1109 11:21:20.073071   39160 fix.go:103] recreateIfNeeded on newest-cni-112024: state=Stopped err=<nil>
	W1109 11:21:20.073102   39160 fix.go:129] unexpected machine state, will restart: <nil>
	I1109 11:21:20.094703   39160 out.go:177] * Restarting existing docker container for "newest-cni-112024" ...
	I1109 11:21:20.115674   39160 cli_runner.go:164] Run: docker start newest-cni-112024
	I1109 11:21:20.441087   39160 cli_runner.go:164] Run: docker container inspect newest-cni-112024 --format={{.State.Status}}
	I1109 11:21:20.499587   39160 kic.go:415] container "newest-cni-112024" state is running.
	I1109 11:21:20.500177   39160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-112024
	I1109 11:21:20.559949   39160 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/newest-cni-112024/config.json ...
	I1109 11:21:20.560442   39160 machine.go:88] provisioning docker machine ...
	I1109 11:21:20.560468   39160 ubuntu.go:169] provisioning hostname "newest-cni-112024"
	I1109 11:21:20.560590   39160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-112024
	I1109 11:21:20.622097   39160 main.go:134] libmachine: Using SSH client type: native
	I1109 11:21:20.622310   39160 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6d00] 0x13e9e80 <nil>  [] 0s} 127.0.0.1 49707 <nil> <nil>}
	I1109 11:21:20.622325   39160 main.go:134] libmachine: About to run SSH command:
	sudo hostname newest-cni-112024 && echo "newest-cni-112024" | sudo tee /etc/hostname
	I1109 11:21:20.754124   39160 main.go:134] libmachine: SSH cmd err, output: <nil>: newest-cni-112024
	
	I1109 11:21:20.754256   39160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-112024
	I1109 11:21:20.814598   39160 main.go:134] libmachine: Using SSH client type: native
	I1109 11:21:20.814766   39160 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6d00] 0x13e9e80 <nil>  [] 0s} 127.0.0.1 49707 <nil> <nil>}
	I1109 11:21:20.814781   39160 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-112024' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-112024/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-112024' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 11:21:20.931522   39160 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I1109 11:21:20.931540   39160 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15331-22028/.minikube CaCertPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15331-22028/.minikube}
	I1109 11:21:20.931564   39160 ubuntu.go:177] setting up certificates
	I1109 11:21:20.931572   39160 provision.go:83] configureAuth start
	I1109 11:21:20.931661   39160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-112024
	I1109 11:21:20.990243   39160 provision.go:138] copyHostCerts
	I1109 11:21:20.990343   39160 exec_runner.go:144] found /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.pem, removing ...
	I1109 11:21:20.990355   39160 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.pem
	I1109 11:21:20.990455   39160 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.pem (1082 bytes)
	I1109 11:21:20.990675   39160 exec_runner.go:144] found /Users/jenkins/minikube-integration/15331-22028/.minikube/cert.pem, removing ...
	I1109 11:21:20.990683   39160 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15331-22028/.minikube/cert.pem
	I1109 11:21:20.990753   39160 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15331-22028/.minikube/cert.pem (1123 bytes)
	I1109 11:21:20.990908   39160 exec_runner.go:144] found /Users/jenkins/minikube-integration/15331-22028/.minikube/key.pem, removing ...
	I1109 11:21:20.990914   39160 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15331-22028/.minikube/key.pem
	I1109 11:21:20.990979   39160 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15331-22028/.minikube/key.pem (1675 bytes)
	I1109 11:21:20.991118   39160 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca-key.pem org=jenkins.newest-cni-112024 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-112024]
	I1109 11:21:21.083685   39160 provision.go:172] copyRemoteCerts
	I1109 11:21:21.083755   39160 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 11:21:21.083826   39160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-112024
	I1109 11:21:21.141889   39160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49707 SSHKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/newest-cni-112024/id_rsa Username:docker}
	I1109 11:21:21.231846   39160 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1109 11:21:21.249070   39160 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1109 11:21:21.266570   39160 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1109 11:21:21.283949   39160 provision.go:86] duration metric: configureAuth took 352.36787ms
	I1109 11:21:21.283961   39160 ubuntu.go:193] setting minikube options for container-runtime
	I1109 11:21:21.284133   39160 config.go:180] Loaded profile config "newest-cni-112024": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1109 11:21:21.284214   39160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-112024
	I1109 11:21:21.341090   39160 main.go:134] libmachine: Using SSH client type: native
	I1109 11:21:21.341244   39160 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6d00] 0x13e9e80 <nil>  [] 0s} 127.0.0.1 49707 <nil> <nil>}
	I1109 11:21:21.341255   39160 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1109 11:21:21.458534   39160 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1109 11:21:21.458562   39160 ubuntu.go:71] root file system type: overlay
	I1109 11:21:21.458766   39160 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1109 11:21:21.458874   39160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-112024
	I1109 11:21:21.515420   39160 main.go:134] libmachine: Using SSH client type: native
	I1109 11:21:21.515593   39160 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6d00] 0x13e9e80 <nil>  [] 0s} 127.0.0.1 49707 <nil> <nil>}
	I1109 11:21:21.515647   39160 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1109 11:21:21.642537   39160 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1109 11:21:21.642666   39160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-112024
	I1109 11:21:21.699425   39160 main.go:134] libmachine: Using SSH client type: native
	I1109 11:21:21.699575   39160 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6d00] 0x13e9e80 <nil>  [] 0s} 127.0.0.1 49707 <nil> <nil>}
	I1109 11:21:21.699588   39160 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1109 11:21:21.821380   39160 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I1109 11:21:21.821394   39160 machine.go:91] provisioned docker machine in 1.26095514s
	I1109 11:21:21.821404   39160 start.go:300] post-start starting for "newest-cni-112024" (driver="docker")
	I1109 11:21:21.821411   39160 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 11:21:21.821489   39160 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 11:21:21.821554   39160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-112024
	I1109 11:21:21.879098   39160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49707 SSHKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/newest-cni-112024/id_rsa Username:docker}
	I1109 11:21:21.966075   39160 ssh_runner.go:195] Run: cat /etc/os-release
	I1109 11:21:21.969926   39160 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1109 11:21:21.969941   39160 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1109 11:21:21.969949   39160 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1109 11:21:21.969953   39160 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I1109 11:21:21.969969   39160 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15331-22028/.minikube/addons for local assets ...
	I1109 11:21:21.970058   39160 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15331-22028/.minikube/files for local assets ...
	I1109 11:21:21.970225   39160 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15331-22028/.minikube/files/etc/ssl/certs/228682.pem -> 228682.pem in /etc/ssl/certs
	I1109 11:21:21.970407   39160 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1109 11:21:21.977559   39160 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/files/etc/ssl/certs/228682.pem --> /etc/ssl/certs/228682.pem (1708 bytes)
	I1109 11:21:21.994896   39160 start.go:303] post-start completed in 173.48274ms
	I1109 11:21:21.994979   39160 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 11:21:21.995052   39160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-112024
	I1109 11:21:22.052090   39160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49707 SSHKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/newest-cni-112024/id_rsa Username:docker}
	I1109 11:21:22.137116   39160 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1109 11:21:22.141699   39160 fix.go:57] fixHost completed within 2.125919191s
	I1109 11:21:22.141712   39160 start.go:83] releasing machines lock for "newest-cni-112024", held for 2.125962483s
	I1109 11:21:22.141812   39160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-112024
	I1109 11:21:22.198522   39160 ssh_runner.go:195] Run: systemctl --version
	I1109 11:21:22.198526   39160 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1109 11:21:22.198594   39160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-112024
	I1109 11:21:22.198609   39160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-112024
	I1109 11:21:22.258322   39160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49707 SSHKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/newest-cni-112024/id_rsa Username:docker}
	I1109 11:21:22.258508   39160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49707 SSHKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/newest-cni-112024/id_rsa Username:docker}
	I1109 11:21:22.397725   39160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1109 11:21:22.405310   39160 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (233 bytes)
	I1109 11:21:22.417548   39160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 11:21:22.487062   39160 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1109 11:21:22.562595   39160 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1109 11:21:22.572765   39160 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I1109 11:21:22.572846   39160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1109 11:21:22.582010   39160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 11:21:22.594586   39160 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1109 11:21:22.661141   39160 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1109 11:21:22.723833   39160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 11:21:22.793728   39160 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1109 11:21:23.032910   39160 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1109 11:21:23.091777   39160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 11:21:23.167053   39160 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I1109 11:21:23.176124   39160 start.go:451] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1109 11:21:23.176209   39160 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1109 11:21:23.180105   39160 start.go:472] Will wait 60s for crictl version
	I1109 11:21:23.180162   39160 ssh_runner.go:195] Run: sudo crictl version
	I1109 11:21:23.209364   39160 start.go:481] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.20
	RuntimeApiVersion:  1.41.0
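	
	The two 60-second waits above are plain polling loops: stat the socket path until it exists, then retry crictl version until it succeeds. A sketch of the socket wait under those assumptions (helper name illustrative):
	
	package main
	
	import (
		"fmt"
		"os"
		"time"
	)
	
	// waitForPath polls until path exists or the timeout elapses, mirroring
	// the "Will wait 60s for socket path" step in the log above.
	func waitForPath(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %s", path)
	}
	
	func main() {
		fmt.Println(waitForPath("/var/run/cri-dockerd.sock", 60*time.Second))
	}
	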
	I1109 11:21:23.209453   39160 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1109 11:21:23.238328   39160 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1109 11:21:23.320205   39160 out.go:204] * Preparing Kubernetes v1.25.3 on Docker 20.10.20 ...
	I1109 11:21:23.320437   39160 cli_runner.go:164] Run: docker exec -t newest-cni-112024 dig +short host.docker.internal
	I1109 11:21:23.436903   39160 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I1109 11:21:23.437023   39160 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I1109 11:21:23.441174   39160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 11:21:23.450768   39160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-112024
	I1109 11:21:23.529730   39160 out.go:177]   - kubeadm.pod-network-cidr=192.168.111.111/16
	I1109 11:21:23.551487   39160 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1109 11:21:23.551679   39160 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1109 11:21:23.576825   39160 docker.go:613] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1109 11:21:23.576843   39160 docker.go:543] Images already preloaded, skipping extraction
	I1109 11:21:23.576946   39160 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1109 11:21:23.600374   39160 docker.go:613] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1109 11:21:23.600396   39160 cache_images.go:84] Images are preloaded, skipping loading
	I1109 11:21:23.600496   39160 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1109 11:21:23.668141   39160 cni.go:95] Creating CNI manager for ""
	I1109 11:21:23.668156   39160 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1109 11:21:23.668171   39160 kubeadm.go:87] Using pod CIDR: 192.168.111.111/16
	I1109 11:21:23.668190   39160 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:192.168.111.111/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-112024 NodeName:newest-cni-112024 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
	I1109 11:21:23.668320   39160 kubeadm.go:161] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "newest-cni-112024"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "192.168.111.111/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "192.168.111.111/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
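	
	Note that the podSubnet above, 192.168.111.111/16, has host bits set; with a /16 mask the effective pod network is 192.168.0.0/16. A quick standard-library check of how that CIDR is interpreted:
	
	package main
	
	import (
		"fmt"
		"net"
	)
	
	func main() {
		// ParseCIDR separates the literal address from the masked network.
		ip, ipnet, err := net.ParseCIDR("192.168.111.111/16")
		if err != nil {
			panic(err)
		}
		fmt.Println(ip, ipnet) // 192.168.111.111 192.168.0.0/16
	}
	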
	
	I1109 11:21:23.668412   39160 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-112024 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.3 ClusterName:newest-cni-112024 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I1109 11:21:23.668484   39160 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
	I1109 11:21:23.675916   39160 binaries.go:44] Found k8s binaries, skipping transfer
	I1109 11:21:23.675991   39160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1109 11:21:23.682803   39160 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (516 bytes)
	I1109 11:21:23.695359   39160 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1109 11:21:23.707382   39160 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2175 bytes)
	I1109 11:21:23.719747   39160 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1109 11:21:23.723250   39160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 11:21:23.732456   39160 certs.go:54] Setting up /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/newest-cni-112024 for IP: 192.168.76.2
	I1109 11:21:23.732579   39160 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.key
	I1109 11:21:23.732638   39160 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15331-22028/.minikube/proxy-client-ca.key
	I1109 11:21:23.732729   39160 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/newest-cni-112024/client.key
	I1109 11:21:23.732789   39160 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/newest-cni-112024/apiserver.key.31bdca25
	I1109 11:21:23.732867   39160 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/newest-cni-112024/proxy-client.key
	I1109 11:21:23.733124   39160 certs.go:388] found cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/22868.pem (1338 bytes)
	W1109 11:21:23.733166   39160 certs.go:384] ignoring /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/22868_empty.pem, impossibly tiny 0 bytes
	I1109 11:21:23.733178   39160 certs.go:388] found cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca-key.pem (1675 bytes)
	I1109 11:21:23.733212   39160 certs.go:388] found cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca.pem (1082 bytes)
	I1109 11:21:23.733250   39160 certs.go:388] found cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/cert.pem (1123 bytes)
	I1109 11:21:23.733282   39160 certs.go:388] found cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/key.pem (1675 bytes)
	I1109 11:21:23.733362   39160 certs.go:388] found cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15331-22028/.minikube/files/etc/ssl/certs/228682.pem (1708 bytes)
	I1109 11:21:23.733920   39160 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/newest-cni-112024/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1109 11:21:23.750892   39160 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/newest-cni-112024/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1109 11:21:23.768167   39160 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/newest-cni-112024/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1109 11:21:23.785946   39160 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/newest-cni-112024/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1109 11:21:23.804780   39160 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 11:21:23.823985   39160 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1109 11:21:23.840615   39160 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 11:21:23.859237   39160 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1109 11:21:23.875884   39160 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/22868.pem --> /usr/share/ca-certificates/22868.pem (1338 bytes)
	I1109 11:21:23.893260   39160 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/files/etc/ssl/certs/228682.pem --> /usr/share/ca-certificates/228682.pem (1708 bytes)
	I1109 11:21:23.910190   39160 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 11:21:23.926752   39160 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1109 11:21:23.939587   39160 ssh_runner.go:195] Run: openssl version
	I1109 11:21:23.944934   39160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22868.pem && ln -fs /usr/share/ca-certificates/22868.pem /etc/ssl/certs/22868.pem"
	I1109 11:21:23.952917   39160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22868.pem
	I1109 11:21:23.957001   39160 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Nov  9 18:08 /usr/share/ca-certificates/22868.pem
	I1109 11:21:23.957054   39160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22868.pem
	I1109 11:21:23.962494   39160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22868.pem /etc/ssl/certs/51391683.0"
	I1109 11:21:23.969644   39160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/228682.pem && ln -fs /usr/share/ca-certificates/228682.pem /etc/ssl/certs/228682.pem"
	I1109 11:21:23.977860   39160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/228682.pem
	I1109 11:21:23.981699   39160 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Nov  9 18:08 /usr/share/ca-certificates/228682.pem
	I1109 11:21:23.981752   39160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/228682.pem
	I1109 11:21:23.987127   39160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/228682.pem /etc/ssl/certs/3ec20f2e.0"
	I1109 11:21:23.994374   39160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 11:21:24.002343   39160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 11:21:24.006209   39160 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Nov  9 18:04 /usr/share/ca-certificates/minikubeCA.pem
	I1109 11:21:24.006266   39160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 11:21:24.011403   39160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
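	
	The test -L / ln -fs pairs above maintain OpenSSL's hashed-directory convention: each CA certificate in /etc/ssl/certs gets a symlink named <subject_hash>.0 (here 51391683.0, 3ec20f2e.0 and b5213941.0) so verification can locate it by hash. A Go sketch of the same idea, shelling out to openssl for the hash (function name illustrative; assumes the openssl binary is on PATH, and unlike ln -fs it fails rather than replaces an existing link):
	
	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)
	
	// linkCACert symlinks certPath into dir under OpenSSL's <hash>.0 name.
	func linkCACert(certPath, dir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		return os.Symlink(certPath, filepath.Join(dir, hash+".0"))
	}
	
	func main() {
		if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
	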
	I1109 11:21:24.018420   39160 kubeadm.go:396] StartCluster: {Name:newest-cni-112024 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:newest-cni-112024 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1109 11:21:24.018543   39160 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1109 11:21:24.041143   39160 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1109 11:21:24.048733   39160 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I1109 11:21:24.048750   39160 kubeadm.go:627] restartCluster start
	I1109 11:21:24.048809   39160 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1109 11:21:24.055552   39160 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1109 11:21:24.055646   39160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-112024
	I1109 11:21:24.113955   39160 kubeconfig.go:135] verify returned: extract IP: "newest-cni-112024" does not appear in /Users/jenkins/minikube-integration/15331-22028/kubeconfig
	I1109 11:21:24.114135   39160 kubeconfig.go:146] "newest-cni-112024" context is missing from /Users/jenkins/minikube-integration/15331-22028/kubeconfig - will repair!
	I1109 11:21:24.114459   39160 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15331-22028/kubeconfig: {Name:mk02bb1c68cad934afd737965b2dbda8f5a4ba2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 11:21:24.115814   39160 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1109 11:21:24.123333   39160 api_server.go:165] Checking apiserver status ...
	I1109 11:21:24.123386   39160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1109 11:21:24.131539   39160 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1109 11:21:24.333659   39160 api_server.go:165] Checking apiserver status ...
	I1109 11:21:24.333851   39160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1109 11:21:24.344340   39160 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1109 11:21:24.531707   39160 api_server.go:165] Checking apiserver status ...
	I1109 11:21:24.531794   39160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1109 11:21:24.541194   39160 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1109 11:21:24.733703   39160 api_server.go:165] Checking apiserver status ...
	I1109 11:21:24.733917   39160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1109 11:21:24.744705   39160 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1109 11:21:24.933657   39160 api_server.go:165] Checking apiserver status ...
	I1109 11:21:24.933853   39160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1109 11:21:24.944220   39160 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1109 11:21:25.133656   39160 api_server.go:165] Checking apiserver status ...
	I1109 11:21:25.133822   39160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1109 11:21:25.144493   39160 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1109 11:21:25.332287   39160 api_server.go:165] Checking apiserver status ...
	I1109 11:21:25.332399   39160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1109 11:21:25.341232   39160 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1109 11:21:25.532871   39160 api_server.go:165] Checking apiserver status ...
	I1109 11:21:25.532995   39160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1109 11:21:25.543611   39160 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1109 11:21:25.733749   39160 api_server.go:165] Checking apiserver status ...
	I1109 11:21:25.733880   39160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1109 11:21:25.745212   39160 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1109 11:21:25.932416   39160 api_server.go:165] Checking apiserver status ...
	I1109 11:21:25.932543   39160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1109 11:21:25.942796   39160 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1109 11:21:26.133697   39160 api_server.go:165] Checking apiserver status ...
	I1109 11:21:26.133885   39160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1109 11:21:26.144623   39160 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1109 11:21:26.333706   39160 api_server.go:165] Checking apiserver status ...
	I1109 11:21:26.333909   39160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1109 11:21:26.345355   39160 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1109 11:21:26.533690   39160 api_server.go:165] Checking apiserver status ...
	I1109 11:21:26.533903   39160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1109 11:21:26.545240   39160 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1109 11:21:26.733654   39160 api_server.go:165] Checking apiserver status ...
	I1109 11:21:26.733834   39160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1109 11:21:26.744812   39160 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1109 11:21:26.932078   39160 api_server.go:165] Checking apiserver status ...
	I1109 11:21:26.932214   39160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1109 11:21:26.942869   39160 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1109 11:21:27.131916   39160 api_server.go:165] Checking apiserver status ...
	I1109 11:21:27.132050   39160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1109 11:21:27.143076   39160 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1109 11:21:27.143086   39160 api_server.go:165] Checking apiserver status ...
	I1109 11:21:27.143146   39160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1109 11:21:27.151214   39160 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1109 11:21:27.151226   39160 kubeadm.go:602] needs reconfigure: apiserver error: timed out waiting for the condition
	I1109 11:21:27.151234   39160 kubeadm.go:1114] stopping kube-system containers ...
	I1109 11:21:27.151313   39160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1109 11:21:27.175295   39160 docker.go:444] Stopping containers: [69ad812ba421 a1ffed4fd8c7 034bc5adb025 ec85c77e6fe3 ff5b37456038 05384ace7dfb 662a3d22b99f 8e643aa63efa bb2c6ce3933d e4fa0ccc8dd0 f1b0990aaac6 d6eac9e51a3c 4f5537f577af 5d9de125dd0d 9d2f4a7ccb70 e356058b0875]
	I1109 11:21:27.175407   39160 ssh_runner.go:195] Run: docker stop 69ad812ba421 a1ffed4fd8c7 034bc5adb025 ec85c77e6fe3 ff5b37456038 05384ace7dfb 662a3d22b99f 8e643aa63efa bb2c6ce3933d e4fa0ccc8dd0 f1b0990aaac6 d6eac9e51a3c 4f5537f577af 5d9de125dd0d 9d2f4a7ccb70 e356058b0875
	I1109 11:21:27.198801   39160 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1109 11:21:27.208997   39160 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1109 11:21:27.216555   39160 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Nov  9 19:20 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Nov  9 19:20 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Nov  9 19:20 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Nov  9 19:20 /etc/kubernetes/scheduler.conf
	
	I1109 11:21:27.216616   39160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1109 11:21:27.223941   39160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1109 11:21:27.231428   39160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1109 11:21:27.238956   39160 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1109 11:21:27.239014   39160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1109 11:21:27.246096   39160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1109 11:21:27.253303   39160 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1109 11:21:27.253372   39160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1109 11:21:27.260298   39160 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1109 11:21:27.267773   39160 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1109 11:21:27.267786   39160 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1109 11:21:27.317803   39160 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1109 11:21:27.812620   39160 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1109 11:21:27.940417   39160 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1109 11:21:27.992684   39160 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1109 11:21:28.083541   39160 api_server.go:51] waiting for apiserver process to appear ...
	I1109 11:21:28.083619   39160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:21:28.595563   39160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:21:29.094246   39160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:21:29.160583   39160 api_server.go:71] duration metric: took 1.077053042s to wait for apiserver process to appear ...
	I1109 11:21:29.160604   39160 api_server.go:87] waiting for apiserver healthz status ...
	I1109 11:21:29.160624   39160 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:49706/healthz ...
	I1109 11:21:29.161900   39160 api_server.go:268] stopped: https://127.0.0.1:49706/healthz: Get "https://127.0.0.1:49706/healthz": EOF
	I1109 11:21:29.663027   39160 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:49706/healthz ...
	I1109 11:21:31.868827   39160 api_server.go:278] https://127.0.0.1:49706/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1109 11:21:31.868846   39160 api_server.go:102] status: https://127.0.0.1:49706/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1109 11:21:32.163604   39160 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:49706/healthz ...
	I1109 11:21:32.170966   39160 api_server.go:278] https://127.0.0.1:49706/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1109 11:21:32.170993   39160 api_server.go:102] status: https://127.0.0.1:49706/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1109 11:21:32.662313   39160 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:49706/healthz ...
	I1109 11:21:32.668140   39160 api_server.go:278] https://127.0.0.1:49706/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1109 11:21:32.668156   39160 api_server.go:102] status: https://127.0.0.1:49706/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1109 11:21:33.162003   39160 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:49706/healthz ...
	I1109 11:21:33.168204   39160 api_server.go:278] https://127.0.0.1:49706/healthz returned 200:
	ok
	I1109 11:21:33.175497   39160 api_server.go:140] control plane version: v1.25.3
	I1109 11:21:33.175514   39160 api_server.go:130] duration metric: took 4.0149409s to wait for apiserver health ...
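	
	The healthz wait above polls https://127.0.0.1:49706/healthz about twice a second, treating each 500 (whose body lists the poststarthooks still failing) as "retry" until the endpoint returns 200 ok. A sketch of such a poller, assuming only what the log shows; TLS verification is skipped because the apiserver certificate presented on 127.0.0.1 is self-signed:
	
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)
	
	// waitHealthz polls url until it returns HTTP 200 or the timeout elapses.
	func waitHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if resp, err := client.Get(url); err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %s", url)
	}
	
	func main() {
		fmt.Println(waitHealthz("https://127.0.0.1:49706/healthz", 4*time.Minute))
	}
	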
	I1109 11:21:33.175521   39160 cni.go:95] Creating CNI manager for ""
	I1109 11:21:33.175530   39160 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1109 11:21:33.175543   39160 system_pods.go:43] waiting for kube-system pods to appear ...
	I1109 11:21:33.183186   39160 system_pods.go:59] 8 kube-system pods found
	I1109 11:21:33.183203   39160 system_pods.go:61] "coredns-565d847f94-c62vb" [587a714d-b418-44bc-9040-50008d1ddd27] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 11:21:33.183209   39160 system_pods.go:61] "etcd-newest-cni-112024" [cf6064fa-e5b6-40b3-bd80-65c64b4947ea] Running
	I1109 11:21:33.183213   39160 system_pods.go:61] "kube-apiserver-newest-cni-112024" [f15ac211-a183-47c9-9190-aa7c5ef9d845] Running
	I1109 11:21:33.183217   39160 system_pods.go:61] "kube-controller-manager-newest-cni-112024" [677a9613-cd21-4af4-a753-94a2414d2d82] Running
	I1109 11:21:33.183222   39160 system_pods.go:61] "kube-proxy-n9s2b" [1fcf5391-a216-431b-9b90-42578a36915a] Running
	I1109 11:21:33.183228   39160 system_pods.go:61] "kube-scheduler-newest-cni-112024" [7c0ddce7-09ba-4073-8bf9-885064d664a3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1109 11:21:33.183235   39160 system_pods.go:61] "metrics-server-5c8fd5cf8-swf96" [668082b5-81b6-4c62-be89-56ddf1564689] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1109 11:21:33.183239   39160 system_pods.go:61] "storage-provisioner" [f7f00579-9d08-494c-ad8b-2b43d998452e] Running
	I1109 11:21:33.183243   39160 system_pods.go:74] duration metric: took 7.693928ms to wait for pod list to return data ...
	I1109 11:21:33.183250   39160 node_conditions.go:102] verifying NodePressure condition ...
	I1109 11:21:33.186289   39160 node_conditions.go:122] node storage ephemeral capacity is 115273188Ki
	I1109 11:21:33.186304   39160 node_conditions.go:123] node cpu capacity is 6
	I1109 11:21:33.186315   39160 node_conditions.go:105] duration metric: took 3.060066ms to run NodePressure ...
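	
	The kube-system pod wait above is an ordinary list call against the freshly repaired kubeconfig. A minimal client-go sketch of that query (assumes a recent client-go; the function name is illustrative, not minikube's actual code):
	
	package main
	
	import (
		"context"
		"fmt"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// listKubeSystemPods prints each kube-system pod and its phase, the same
	// data summarized in the system_pods.go lines above.
	func listKubeSystemPods(kubeconfig string) error {
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			return err
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			return err
		}
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			return err
		}
		for _, p := range pods.Items {
			fmt.Println(p.Name, p.Status.Phase)
		}
		return nil
	}
	
	func main() {
		_ = listKubeSystemPods("/Users/jenkins/minikube-integration/15331-22028/kubeconfig")
	}
	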
	I1109 11:21:33.186331   39160 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1109 11:21:33.467116   39160 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1109 11:21:33.476211   39160 ops.go:34] apiserver oom_adj: -16
	I1109 11:21:33.476230   39160 kubeadm.go:631] restartCluster took 9.427560742s
	I1109 11:21:33.476245   39160 kubeadm.go:398] StartCluster complete in 9.45791491s
	I1109 11:21:33.476266   39160 settings.go:142] acquiring lock: {Name:mke93232301b59b22d43a378e933baa222d3feda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 11:21:33.476350   39160 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/15331-22028/kubeconfig
	I1109 11:21:33.478291   39160 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15331-22028/kubeconfig: {Name:mk02bb1c68cad934afd737965b2dbda8f5a4ba2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 11:21:33.481545   39160 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "newest-cni-112024" rescaled to 1
	I1109 11:21:33.481582   39160 start.go:212] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1109 11:21:33.481597   39160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1109 11:21:33.481634   39160 addons.go:486] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I1109 11:21:33.481850   39160 config.go:180] Loaded profile config "newest-cni-112024": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1109 11:21:33.526553   39160 out.go:177] * Verifying Kubernetes components...
	I1109 11:21:33.526629   39160 addons.go:65] Setting storage-provisioner=true in profile "newest-cni-112024"
	I1109 11:21:33.526629   39160 addons.go:65] Setting dashboard=true in profile "newest-cni-112024"
	I1109 11:21:33.547675   39160 addons.go:227] Setting addon storage-provisioner=true in "newest-cni-112024"
	I1109 11:21:33.547679   39160 addons.go:227] Setting addon dashboard=true in "newest-cni-112024"
	I1109 11:21:33.526634   39160 addons.go:65] Setting default-storageclass=true in profile "newest-cni-112024"
	W1109 11:21:33.547687   39160 addons.go:236] addon dashboard should already be in state true
	I1109 11:21:33.547707   39160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 11:21:33.526640   39160 addons.go:65] Setting metrics-server=true in profile "newest-cni-112024"
	I1109 11:21:33.547718   39160 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-112024"
	I1109 11:21:33.547727   39160 addons.go:227] Setting addon metrics-server=true in "newest-cni-112024"
	W1109 11:21:33.547733   39160 addons.go:236] addon metrics-server should already be in state true
	W1109 11:21:33.547687   39160 addons.go:236] addon storage-provisioner should already be in state true
	I1109 11:21:33.547751   39160 host.go:66] Checking if "newest-cni-112024" exists ...
	I1109 11:21:33.547767   39160 host.go:66] Checking if "newest-cni-112024" exists ...
	I1109 11:21:33.547794   39160 host.go:66] Checking if "newest-cni-112024" exists ...
	I1109 11:21:33.548040   39160 cli_runner.go:164] Run: docker container inspect newest-cni-112024 --format={{.State.Status}}
	I1109 11:21:33.548103   39160 cli_runner.go:164] Run: docker container inspect newest-cni-112024 --format={{.State.Status}}
	I1109 11:21:33.548150   39160 cli_runner.go:164] Run: docker container inspect newest-cni-112024 --format={{.State.Status}}
	I1109 11:21:33.548177   39160 cli_runner.go:164] Run: docker container inspect newest-cni-112024 --format={{.State.Status}}
	I1109 11:21:33.678554   39160 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I1109 11:21:33.656499   39160 addons.go:227] Setting addon default-storageclass=true in "newest-cni-112024"
	I1109 11:21:33.675486   39160 start.go:806] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1109 11:21:33.675541   39160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-112024
	I1109 11:21:33.714500   39160 addons.go:419] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1109 11:21:33.735893   39160 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1109 11:21:33.772708   39160 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	W1109 11:21:33.772721   39160 addons.go:236] addon default-storageclass should already be in state true
	I1109 11:21:33.772722   39160 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1109 11:21:33.809800   39160 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 11:21:33.830710   39160 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1109 11:21:33.830722   39160 host.go:66] Checking if "newest-cni-112024" exists ...
	I1109 11:21:33.830865   39160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-112024
	I1109 11:21:33.867721   39160 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I1109 11:21:33.830878   39160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-112024
	I1109 11:21:33.831240   39160 cli_runner.go:164] Run: docker container inspect newest-cni-112024 --format={{.State.Status}}
	I1109 11:21:33.904756   39160 addons.go:419] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1109 11:21:33.904778   39160 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1109 11:21:33.905475   39160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-112024
	I1109 11:21:33.919339   39160 api_server.go:51] waiting for apiserver process to appear ...
	I1109 11:21:33.919452   39160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:21:33.940329   39160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49707 SSHKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/newest-cni-112024/id_rsa Username:docker}
	I1109 11:21:33.948627   39160 api_server.go:71] duration metric: took 467.028048ms to wait for apiserver process to appear ...
	I1109 11:21:33.948698   39160 api_server.go:87] waiting for apiserver healthz status ...
	I1109 11:21:33.948713   39160 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:49706/healthz ...
	I1109 11:21:33.959880   39160 api_server.go:278] https://127.0.0.1:49706/healthz returned 200:
	ok
	I1109 11:21:33.961634   39160 api_server.go:140] control plane version: v1.25.3
	I1109 11:21:33.961649   39160 api_server.go:130] duration metric: took 12.941285ms to wait for apiserver health ...
	I1109 11:21:33.961658   39160 system_pods.go:43] waiting for kube-system pods to appear ...
	I1109 11:21:33.969813   39160 system_pods.go:59] 8 kube-system pods found
	I1109 11:21:33.969850   39160 system_pods.go:61] "coredns-565d847f94-c62vb" [587a714d-b418-44bc-9040-50008d1ddd27] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 11:21:33.969860   39160 system_pods.go:61] "etcd-newest-cni-112024" [cf6064fa-e5b6-40b3-bd80-65c64b4947ea] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1109 11:21:33.969870   39160 system_pods.go:61] "kube-apiserver-newest-cni-112024" [f15ac211-a183-47c9-9190-aa7c5ef9d845] Running
	I1109 11:21:33.969879   39160 system_pods.go:61] "kube-controller-manager-newest-cni-112024" [677a9613-cd21-4af4-a753-94a2414d2d82] Running
	I1109 11:21:33.969885   39160 system_pods.go:61] "kube-proxy-n9s2b" [1fcf5391-a216-431b-9b90-42578a36915a] Running
	I1109 11:21:33.969904   39160 system_pods.go:61] "kube-scheduler-newest-cni-112024" [7c0ddce7-09ba-4073-8bf9-885064d664a3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1109 11:21:33.969921   39160 system_pods.go:61] "metrics-server-5c8fd5cf8-swf96" [668082b5-81b6-4c62-be89-56ddf1564689] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1109 11:21:33.969931   39160 system_pods.go:61] "storage-provisioner" [f7f00579-9d08-494c-ad8b-2b43d998452e] Running
	I1109 11:21:33.969939   39160 system_pods.go:74] duration metric: took 8.275476ms to wait for pod list to return data ...
	I1109 11:21:33.969954   39160 default_sa.go:34] waiting for default service account to be created ...
	I1109 11:21:33.973693   39160 default_sa.go:45] found service account: "default"
	I1109 11:21:33.973713   39160 default_sa.go:55] duration metric: took 3.751162ms for default service account to be created ...
	I1109 11:21:33.973725   39160 kubeadm.go:573] duration metric: took 492.129941ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I1109 11:21:33.973745   39160 node_conditions.go:102] verifying NodePressure condition ...
	I1109 11:21:33.977462   39160 node_conditions.go:122] node storage ephemeral capacity is 115273188Ki
	I1109 11:21:33.977479   39160 node_conditions.go:123] node cpu capacity is 6
	I1109 11:21:33.977489   39160 node_conditions.go:105] duration metric: took 3.739667ms to run NodePressure ...
	I1109 11:21:33.977501   39160 start.go:217] waiting for startup goroutines ...
	I1109 11:21:33.998152   39160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49707 SSHKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/newest-cni-112024/id_rsa Username:docker}
	I1109 11:21:33.999612   39160 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I1109 11:21:33.999629   39160 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1109 11:21:33.999730   39160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-112024
	I1109 11:21:34.001886   39160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49707 SSHKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/newest-cni-112024/id_rsa Username:docker}
	I1109 11:21:34.068959   39160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49707 SSHKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/newest-cni-112024/id_rsa Username:docker}
	I1109 11:21:34.074072   39160 addons.go:419] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1109 11:21:34.074084   39160 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I1109 11:21:34.100522   39160 addons.go:419] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1109 11:21:34.100539   39160 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1109 11:21:34.154472   39160 addons.go:419] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1109 11:21:34.154484   39160 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1109 11:21:34.160052   39160 addons.go:419] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1109 11:21:34.160066   39160 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1109 11:21:34.166248   39160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 11:21:34.176599   39160 addons.go:419] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1109 11:21:34.176623   39160 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1109 11:21:34.177280   39160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1109 11:21:34.194194   39160 addons.go:419] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1109 11:21:34.194207   39160 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1109 11:21:34.250400   39160 addons.go:419] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1109 11:21:34.250419   39160 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4206 bytes)
	I1109 11:21:34.253560   39160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1109 11:21:34.273242   39160 addons.go:419] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1109 11:21:34.288106   39160 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1109 11:21:34.362206   39160 addons.go:419] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1109 11:21:34.362218   39160 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1109 11:21:34.473870   39160 addons.go:419] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1109 11:21:34.473887   39160 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1109 11:21:34.556895   39160 addons.go:419] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1109 11:21:34.556911   39160 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1109 11:21:34.578415   39160 addons.go:419] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1109 11:21:34.578431   39160 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1109 11:21:34.596247   39160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1109 11:21:35.463340   39160 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.297079899s)
	I1109 11:21:35.470083   39160 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.292789582s)
	I1109 11:21:35.470109   39160 addons.go:457] Verifying addon metrics-server=true in "newest-cni-112024"
	I1109 11:21:35.470131   39160 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.182052729s)
	I1109 11:21:35.595388   39160 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-112024 addons enable metrics-server	
	
	
	I1109 11:21:35.637444   39160 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	I1109 11:21:35.712357   39160 addons.go:488] enableAddons completed in 2.230736857s
	I1109 11:21:35.712756   39160 ssh_runner.go:195] Run: rm -f paused
	I1109 11:21:35.752187   39160 start.go:506] kubectl: 1.25.2, cluster: 1.25.3 (minor skew: 0)
	I1109 11:21:35.773394   39160 out.go:177] * Done! kubectl is now configured to use "newest-cni-112024" cluster and "default" namespace by default
	
	* 
	* ==> Docker <==
	* -- Logs begin at Wed 2022-11-09 19:06:03 UTC, end at Wed 2022-11-09 19:23:43 UTC. --
	Nov 09 19:06:05 old-k8s-version-110019 systemd[1]: Stopping Docker Application Container Engine...
	Nov 09 19:06:05 old-k8s-version-110019 dockerd[132]: time="2022-11-09T19:06:05.637484207Z" level=info msg="Processing signal 'terminated'"
	Nov 09 19:06:05 old-k8s-version-110019 dockerd[132]: time="2022-11-09T19:06:05.638447358Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Nov 09 19:06:05 old-k8s-version-110019 dockerd[132]: time="2022-11-09T19:06:05.638987991Z" level=info msg="Daemon shutdown complete"
	Nov 09 19:06:05 old-k8s-version-110019 systemd[1]: docker.service: Succeeded.
	Nov 09 19:06:05 old-k8s-version-110019 systemd[1]: Stopped Docker Application Container Engine.
	Nov 09 19:06:05 old-k8s-version-110019 systemd[1]: Starting Docker Application Container Engine...
	Nov 09 19:06:05 old-k8s-version-110019 dockerd[426]: time="2022-11-09T19:06:05.689235014Z" level=info msg="Starting up"
	Nov 09 19:06:05 old-k8s-version-110019 dockerd[426]: time="2022-11-09T19:06:05.690811504Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Nov 09 19:06:05 old-k8s-version-110019 dockerd[426]: time="2022-11-09T19:06:05.690844834Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Nov 09 19:06:05 old-k8s-version-110019 dockerd[426]: time="2022-11-09T19:06:05.690861011Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Nov 09 19:06:05 old-k8s-version-110019 dockerd[426]: time="2022-11-09T19:06:05.690872322Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Nov 09 19:06:05 old-k8s-version-110019 dockerd[426]: time="2022-11-09T19:06:05.691897097Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Nov 09 19:06:05 old-k8s-version-110019 dockerd[426]: time="2022-11-09T19:06:05.691925520Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Nov 09 19:06:05 old-k8s-version-110019 dockerd[426]: time="2022-11-09T19:06:05.691937393Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Nov 09 19:06:05 old-k8s-version-110019 dockerd[426]: time="2022-11-09T19:06:05.691943086Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Nov 09 19:06:05 old-k8s-version-110019 dockerd[426]: time="2022-11-09T19:06:05.695241631Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Nov 09 19:06:05 old-k8s-version-110019 dockerd[426]: time="2022-11-09T19:06:05.699296631Z" level=info msg="Loading containers: start."
	Nov 09 19:06:05 old-k8s-version-110019 dockerd[426]: time="2022-11-09T19:06:05.776641771Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Nov 09 19:06:05 old-k8s-version-110019 dockerd[426]: time="2022-11-09T19:06:05.808607293Z" level=info msg="Loading containers: done."
	Nov 09 19:06:05 old-k8s-version-110019 dockerd[426]: time="2022-11-09T19:06:05.816483948Z" level=info msg="Docker daemon" commit=03df974 graphdriver(s)=overlay2 version=20.10.20
	Nov 09 19:06:05 old-k8s-version-110019 dockerd[426]: time="2022-11-09T19:06:05.816574036Z" level=info msg="Daemon has completed initialization"
	Nov 09 19:06:05 old-k8s-version-110019 systemd[1]: Started Docker Application Container Engine.
	Nov 09 19:06:05 old-k8s-version-110019 dockerd[426]: time="2022-11-09T19:06:05.837400763Z" level=info msg="API listen on [::]:2376"
	Nov 09 19:06:05 old-k8s-version-110019 dockerd[426]: time="2022-11-09T19:06:05.842877088Z" level=info msg="API listen on /var/run/docker.sock"
	
	* 
	* ==> container status <==
	* time="2022-11-09T19:23:45Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> kernel <==
	*  19:23:45 up  4:23,  0 users,  load average: 0.42, 0.68, 0.89
	Linux old-k8s-version-110019 5.15.49-linuxkit #1 SMP Tue Sep 13 07:51:46 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2022-11-09 19:06:03 UTC, end at Wed 2022-11-09 19:23:45 UTC. --
	Nov 09 19:23:44 old-k8s-version-110019 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Nov 09 19:23:44 old-k8s-version-110019 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 927.
	Nov 09 19:23:44 old-k8s-version-110019 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Nov 09 19:23:44 old-k8s-version-110019 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Nov 09 19:23:44 old-k8s-version-110019 kubelet[24531]: I1109 19:23:44.754663   24531 server.go:410] Version: v1.16.0
	Nov 09 19:23:44 old-k8s-version-110019 kubelet[24531]: I1109 19:23:44.754927   24531 plugins.go:100] No cloud provider specified.
	Nov 09 19:23:44 old-k8s-version-110019 kubelet[24531]: I1109 19:23:44.754941   24531 server.go:773] Client rotation is on, will bootstrap in background
	Nov 09 19:23:44 old-k8s-version-110019 kubelet[24531]: I1109 19:23:44.756875   24531 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Nov 09 19:23:44 old-k8s-version-110019 kubelet[24531]: W1109 19:23:44.757609   24531 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Nov 09 19:23:44 old-k8s-version-110019 kubelet[24531]: W1109 19:23:44.757675   24531 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Nov 09 19:23:44 old-k8s-version-110019 kubelet[24531]: F1109 19:23:44.757699   24531 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Nov 09 19:23:44 old-k8s-version-110019 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Nov 09 19:23:44 old-k8s-version-110019 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Nov 09 19:23:45 old-k8s-version-110019 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 928.
	Nov 09 19:23:45 old-k8s-version-110019 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Nov 09 19:23:45 old-k8s-version-110019 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Nov 09 19:23:45 old-k8s-version-110019 kubelet[24543]: I1109 19:23:45.478598   24543 server.go:410] Version: v1.16.0
	Nov 09 19:23:45 old-k8s-version-110019 kubelet[24543]: I1109 19:23:45.478827   24543 plugins.go:100] No cloud provider specified.
	Nov 09 19:23:45 old-k8s-version-110019 kubelet[24543]: I1109 19:23:45.478838   24543 server.go:773] Client rotation is on, will bootstrap in background
	Nov 09 19:23:45 old-k8s-version-110019 kubelet[24543]: I1109 19:23:45.480608   24543 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Nov 09 19:23:45 old-k8s-version-110019 kubelet[24543]: W1109 19:23:45.481322   24543 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Nov 09 19:23:45 old-k8s-version-110019 kubelet[24543]: W1109 19:23:45.481386   24543 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Nov 09 19:23:45 old-k8s-version-110019 kubelet[24543]: F1109 19:23:45.481422   24543 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Nov 09 19:23:45 old-k8s-version-110019 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Nov 09 19:23:45 old-k8s-version-110019 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
	

-- /stdout --
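The "==> container status <==" section of the dump is empty except for a fatal from the log collector: on this v1.16-era cluster the kubelet itself serves the CRI socket at unix:///var/run/dockershim.sock, so while the kubelet is crash-looping nothing listens there and the connect deadline expires. A one-off check from the host (a diagnostic sketch, not part of the test harness; it assumes the old-k8s-version-110019 node container is still up) would be:

	docker exec old-k8s-version-110019 ls -l /var/run/dockershim.sock

An error from ls here simply confirms that no kubelet instance has survived long enough to create the socket.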
** stderr ** 
	E1109 11:23:45.709341   39511 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

** /stderr **
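The root cause is in the kubelet section of the dump above: every restart (counter 927, 928, ...) dies with "failed to run Kubelet: mountpoint for cpu not found", meaning no cpu cgroup controller mount is visible inside the node container — plausibly because the 5.15 linuxkit host exposes only a unified cgroup v2 hierarchy, which kubelet v1.16 predates. With no kubelet, the static apiserver pod never starts, and the "connection refused" on localhost:8443 in the stderr block follows directly. A quick confirmation from the host (a diagnostic sketch; it assumes docker exec access to the node container) is:

	# show cgroup mounts and controller status as the node container sees them
	docker exec old-k8s-version-110019 sh -c 'grep cgroup /proc/mounts; cat /proc/cgroups'

On a node that kubelet v1.16 can run on, the cpu controller shows up both as a cgroup v1 mount and with enabled=1 in /proc/cgroups.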
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-110019 -n old-k8s-version-110019
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-110019 -n old-k8s-version-110019: exit status 2 (396.666213ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-110019" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (574.78s)
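For reference, minikube's --format flag takes a Go template over the status struct, so the probe above can report every component in one call rather than only {{.APIServer}} (a sketch using the same binary and profile as the test):

	out/minikube-darwin-amd64 status -p old-k8s-version-110019 --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}}'

A stopped component yields a nonzero exit, hence the exit status 2 above, which the helper explicitly tolerates ("may be ok").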

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (554.7s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:65166/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1109 11:24:14.255578   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/kubenet-104027/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:65166/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1109 11:24:20.338469   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/cilium-104028/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:65166/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1109 11:24:38.339116   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/default-k8s-diff-port-111353/client.crt: no such file or directory
E1109 11:24:38.345552   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/default-k8s-diff-port-111353/client.crt: no such file or directory
E1109 11:24:38.357740   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/default-k8s-diff-port-111353/client.crt: no such file or directory
E1109 11:24:38.379928   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/default-k8s-diff-port-111353/client.crt: no such file or directory
E1109 11:24:38.422235   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/default-k8s-diff-port-111353/client.crt: no such file or directory
E1109 11:24:38.503322   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/default-k8s-diff-port-111353/client.crt: no such file or directory
E1109 11:24:38.663531   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/default-k8s-diff-port-111353/client.crt: no such file or directory
E1109 11:24:38.985687   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/default-k8s-diff-port-111353/client.crt: no such file or directory
E1109 11:24:39.627978   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/default-k8s-diff-port-111353/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:65166/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1109 11:24:40.909329   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/default-k8s-diff-port-111353/client.crt: no such file or directory
E1109 11:24:43.471541   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/default-k8s-diff-port-111353/client.crt: no such file or directory
E1109 11:24:48.592288   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/default-k8s-diff-port-111353/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:65166/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1109 11:24:57.879380   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/calico-104028/client.crt: no such file or directory
E1109 11:24:58.834515   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/default-k8s-diff-port-111353/client.crt: no such file or directory
E1109 11:25:02.071673   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/skaffold-103914/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:65166/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1109 11:25:19.316621   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/default-k8s-diff-port-111353/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:65166/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1109 11:25:28.028191   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/false-104027/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:65166/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1109 11:25:56.634409   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/addons-100328/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:65166/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1109 11:26:00.358070   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/default-k8s-diff-port-111353/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:65166/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1109 11:26:12.722032   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/auto-104027/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:65166/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1109 11:26:31.406495   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/no-preload-110035/client.crt: no such file or directory
E1109 11:26:33.864149   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/bridge-104027/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:65166/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1109 11:26:45.382343   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/functional-100827/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:65166/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1109 11:27:22.279280   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/default-k8s-diff-port-111353/client.crt: no such file or directory
E1109 11:27:22.606478   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/kindnet-104027/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:65166/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1109 11:27:54.459657   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/no-preload-110035/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:65166/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1109 11:28:08.862673   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/enable-default-cni-104027/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:65166/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1109 11:29:14.334983   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/kubenet-104027/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:65166/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1109 11:29:15.781247   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/auto-104027/client.crt: no such file or directory
E1109 11:29:20.418636   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/cilium-104028/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:65166/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1109 11:29:38.419210   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/default-k8s-diff-port-111353/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:65166/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1109 11:29:57.958603   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/calico-104028/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:65166/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1109 11:30:02.152630   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/skaffold-103914/client.crt: no such file or directory
E1109 11:30:06.121957   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/default-k8s-diff-port-111353/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:65166/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E1109 11:30:25.656321   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/kindnet-104027/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E1109 11:30:28.108791   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/false-104027/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E1109 11:30:56.635649   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/addons-100328/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E1109 11:31:12.720641   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/auto-104027/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E1109 11:31:31.405115   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/no-preload-110035/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E1109 11:31:33.865518   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/bridge-104027/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E1109 11:31:45.383056   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/functional-100827/client.crt: no such file or directory
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
helpers_test.go:327: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
E1109 11:32:22.607178   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/kindnet-104027/client.crt: no such file or directory
E1109 11:32:23.466804   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/cilium-104028/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: timed out waiting for the condition ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-110019 -n old-k8s-version-110019
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-110019 -n old-k8s-version-110019: exit status 2 (393.836945ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-110019" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: timed out waiting for the condition
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-110019 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-110019 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.883µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-110019 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " k8s.gcr.io/echoserver:1.4". Addon deployment info: 
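Note: the describe call above failed in 1.883µs because the test's own context deadline had already expired, so no deployment info was captured. For manual triage against a still-running cluster, the dashboard pods and their images can be listed directly; this is a sketch reusing the profile, namespace, and deployment names from this log, and it assumes a fresh kubectl context rather than the expired test context:

	kubectl --context old-k8s-version-110019 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard -o wide
	kubectl --context old-k8s-version-110019 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'

The second command prints the images the scraper deployment actually references, which is the value the "Expected to contain k8s.gcr.io/echoserver:1.4" assertion checks.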
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect old-k8s-version-110019
helpers_test.go:235: (dbg) docker inspect old-k8s-version-110019:

-- stdout --
	[
	    {
	        "Id": "179e62f50506851342bc4b05c87d0504aae01b76a29a4d4cfea86e9a42803961",
	        "Created": "2022-11-09T19:00:25.764137036Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 280606,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2022-11-09T19:06:03.12299926Z",
	            "FinishedAt": "2022-11-09T19:06:00.201300966Z"
	        },
	        "Image": "sha256:866c1fe4e3f2d2bfd7e546c12f77c7ef1d94d65a891923ff6772712a9f20df40",
	        "ResolvConfPath": "/var/lib/docker/containers/179e62f50506851342bc4b05c87d0504aae01b76a29a4d4cfea86e9a42803961/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/179e62f50506851342bc4b05c87d0504aae01b76a29a4d4cfea86e9a42803961/hostname",
	        "HostsPath": "/var/lib/docker/containers/179e62f50506851342bc4b05c87d0504aae01b76a29a4d4cfea86e9a42803961/hosts",
	        "LogPath": "/var/lib/docker/containers/179e62f50506851342bc4b05c87d0504aae01b76a29a4d4cfea86e9a42803961/179e62f50506851342bc4b05c87d0504aae01b76a29a4d4cfea86e9a42803961-json.log",
	        "Name": "/old-k8s-version-110019",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-110019:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-110019",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 2306867200,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/6175df1d28c3e25a0e5c63a5d7570f40eb9f7f886fa3164897a59ab06ccc7cd7-init/diff:/var/lib/docker/overlay2/8c1487330bae95024fb04d0a8169f7cc81fd1ba3c27821870f7ac7c3f14eba21/diff:/var/lib/docker/overlay2/bcaf2c5b25be7a7acfb5b663242cc7456d579ea111b07e556bc197c7bfe8eceb/diff:/var/lib/docker/overlay2/0638d8210ce7d8ac0e4379a16e33ec4ba3dad0040bc7b1e6eee9a3ce3b1bab29/diff:/var/lib/docker/overlay2/82d04ede67e6bea7f3cfd2fd8cdf0af23333441d1a311f6c55109e45255a64ad/diff:/var/lib/docker/overlay2/00bbdacd39c41ffbc754eaba2d71640e0fb4097eb9097b8c2a5999bb5a8d4954/diff:/var/lib/docker/overlay2/dcea734b558e644021b8ede0f23c4e46a58e4c344becb334c465fd62b5d48e24/diff:/var/lib/docker/overlay2/ac3602d3dd4e947c3a4676ef8c632089eb73ee68aba964a7d95271ee18eb97f2/diff:/var/lib/docker/overlay2/ac2acc0194de08599857f1b8448ae7b4683ed77f947900bfd694cf26f6c54ffc/diff:/var/lib/docker/overlay2/fdbfaed38c23fa0bd5c54d311629017408fe01fee83151dd3f3d638a7617f4e4/diff:/var/lib/docker/overlay2/d025fd
583df9cfe294d4d46082700b7f5c621b93a796ba7f8f971ddaa60fd83a/diff:/var/lib/docker/overlay2/f4c2a2db4696fc9f1bd6e98e05d393517d2daaeb90f35ae457c61d742e4cc236/diff:/var/lib/docker/overlay2/5ca3c90c302636922d6701cd2547bba3ccd398ec5ade10e04dccd4fe6104a487/diff:/var/lib/docker/overlay2/a5a65589498adaf58375923e30a95f690962a85ecbf6af317b41821b327542b2/diff:/var/lib/docker/overlay2/ff71186ee131d2e64c9cb2be6b53d85bf84ea4a195c417de669d42fe5e10eecd/diff:/var/lib/docker/overlay2/493a221169b45236aaee4b88113fdb3c67c8fbb99e614b4a728d47a8448a3f3f/diff:/var/lib/docker/overlay2/4bafd70e2ae935045921b84746858ec62889e360ddf11495e2a15831b74efc0a/diff:/var/lib/docker/overlay2/90fd6faa0cf3969fb696847bf51d309918860f0cc4599a708e4932647f26c73e/diff:/var/lib/docker/overlay2/ea92881c6586b95c867a9734394d9d100f56f7cbe0812c11395e47b6035c4508/diff:/var/lib/docker/overlay2/ecab8d41ffba5fecbe6e01377fa7b74a9a81ceea0b6ce37ad2373c1bbf89f44a/diff:/var/lib/docker/overlay2/0a01bb2689fa7bca8ea3322bf7e0b9d33392f902c096d5e452da6755180c4a06/diff:/var/lib/d
ocker/overlay2/ab470b7aab8ddccf634d27d72ad09bcf355c2bd4439dcdf67f345220671e4238/diff:/var/lib/docker/overlay2/e7aae4cf5fe266e78947648cb680b6e10a1e6f6527df18d86605a770111ddaa5/diff:/var/lib/docker/overlay2/6dd4c667173ad3322ca465531a62d549cfe66fbb40165818a4e3923e37895eee/diff:/var/lib/docker/overlay2/6053a29c5dc20476b02a6b6d0dafc1d7a81702c6680392177192d709341eabd0/diff:/var/lib/docker/overlay2/80d8ec07feaf3a90ae374a6503523b083045c37de15abf3c2f12d0a21bea84c4/diff:/var/lib/docker/overlay2/55ad8679d9710c334bac8daf6e3b0f9a8fcafc62f44b8f2612bb054ff91aac64/diff:/var/lib/docker/overlay2/64743b589f654fa1e35b0e7be5ff94a3bebfa17c8f1c9811e0d42cdade3f57e7/diff:/var/lib/docker/overlay2/3722e4a69202d28b84adf462e6aa9f065e8079b1c00f372b68d56c9b2c44e658/diff:/var/lib/docker/overlay2/d1ceccb867521773a63007a600d64b8537e1cb227e2d9a6f9df5525e8315b3ef/diff:/var/lib/docker/overlay2/5de0b7762a7bcd971dba6ab8b5ec3a1163b2eb72c904b17e6b0b10dac2ed8cc6/diff:/var/lib/docker/overlay2/36f2255b89964a0e12e3175634bd5c1dfabf520e5a894e260323e26c3a3
83e8c/diff:/var/lib/docker/overlay2/58ca82e7923ce16120ce2bdcabd5d071ca9618a7139cac111d5d271fcb44d6b6/diff:/var/lib/docker/overlay2/c6b28d136c7e3834c9977a2115a7c798e71334d33a76997b156f96642e187677/diff:/var/lib/docker/overlay2/8a75a817735ea5c25b9b75502ba91bba33b5160dab28a17f2f44fa68bd8dcc3f/diff:/var/lib/docker/overlay2/4513fa1cc1e8023f3c0a924e36218c37dfe3595aec46e4d2d96d6c165774b8a3/diff:/var/lib/docker/overlay2/3d3be6ad927b487673f3c43210c9ea9a1acfa4d46cbcb724fce27baf9158b507/diff:/var/lib/docker/overlay2/b8e22ec10062469f680485d2f5f73afce0218c32b25e56188c00547a8152d0c7/diff:/var/lib/docker/overlay2/cb1cb5efbfa387d8fc791f28bdad103d39664ae58a6e372eddc5588db5779427/diff:/var/lib/docker/overlay2/c796de90ee7673fa4d316d056c320ee04f0b6ba574aaa33e4073e3a7200c11a6/diff:/var/lib/docker/overlay2/73c2de759693b5ffd934f7354e3db91ba89c6a5a9c24621fd7c27411bc335c5a/diff:/var/lib/docker/overlay2/46e9fe39b8edeecbe0b31037d24c2994ac3848fbb3af5ed3c47ca2fc1ad0d301/diff:/var/lib/docker/overlay2/febe0fa15a70685bf242a86e91427efdb9b7ec
302a48a7004f89cc569145c7a1/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6175df1d28c3e25a0e5c63a5d7570f40eb9f7f886fa3164897a59ab06ccc7cd7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6175df1d28c3e25a0e5c63a5d7570f40eb9f7f886fa3164897a59ab06ccc7cd7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6175df1d28c3e25a0e5c63a5d7570f40eb9f7f886fa3164897a59ab06ccc7cd7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-110019",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-110019/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-110019",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-110019",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-110019",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "045b0c7f7b825a9537dd5e9af2fae17397ce41b99df276c464f08a1c8dd05584",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "65162"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "65163"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "65164"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "65165"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "0.0.0.0",
	                        "HostPort": "65166"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/045b0c7f7b82",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-110019": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "179e62f50506",
	                        "old-k8s-version-110019"
	                    ],
	                    "NetworkID": "70a1b44058ab5d3fa2f8c48ca78ea76e689efbb2630885d7458319462051756b",
	                    "EndpointID": "306574ce40dd95945d1d7c9e7051a4cf459e90d59c42b7a6c013c989c23ad2d6",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
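Note: the full docker inspect dump above is what the post-mortem helper records verbatim. When triaging by hand, Go templates can pull out just the fields relevant to this failure, container state and published ports, instead of the whole document; a sketch, assuming the old-k8s-version-110019 container still exists:

	docker inspect -f '{{.State.Status}} started={{.State.StartedAt}} finished={{.State.FinishedAt}}' old-k8s-version-110019
	docker inspect -f '{{json .NetworkSettings.Ports}}' old-k8s-version-110019

Here State.Status is "running" even though the earlier minikube status probe reported the apiserver as Stopped, which narrows the failure to inside the node rather than to the container runtime.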
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-110019 -n old-k8s-version-110019
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-110019 -n old-k8s-version-110019: exit status 2 (394.239098ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
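Note: the two status probes disagree: --format={{.Host}} reports Running while the earlier --format={{.APIServer}} probe reported Stopped, which is exactly the split that makes exit status 2 "may be ok". Querying the fields together in one invocation avoids comparing separate runs; a sketch, where Host and APIServer appear in this log and Kubelet is assumed to be an available status field:

	out/minikube-darwin-amd64 status -p old-k8s-version-110019 --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'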
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-110019 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p old-k8s-version-110019 logs -n 25: (3.406217702s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                            Args                            |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| pause   | -p embed-certs-110722                                      | embed-certs-110722           | jenkins | v1.28.0 | 09 Nov 22 11:13 PST | 09 Nov 22 11:13 PST |
	|         | --alsologtostderr -v=1                                     |                              |         |         |                     |                     |
	| unpause | -p embed-certs-110722                                      | embed-certs-110722           | jenkins | v1.28.0 | 09 Nov 22 11:13 PST | 09 Nov 22 11:13 PST |
	|         | --alsologtostderr -v=1                                     |                              |         |         |                     |                     |
	| delete  | -p embed-certs-110722                                      | embed-certs-110722           | jenkins | v1.28.0 | 09 Nov 22 11:13 PST | 09 Nov 22 11:13 PST |
	| delete  | -p embed-certs-110722                                      | embed-certs-110722           | jenkins | v1.28.0 | 09 Nov 22 11:13 PST | 09 Nov 22 11:13 PST |
	| delete  | -p                                                         | disable-driver-mounts-111353 | jenkins | v1.28.0 | 09 Nov 22 11:13 PST | 09 Nov 22 11:13 PST |
	|         | disable-driver-mounts-111353                               |                              |         |         |                     |                     |
	| start   | -p                                                         | default-k8s-diff-port-111353 | jenkins | v1.28.0 | 09 Nov 22 11:13 PST | 09 Nov 22 11:14 PST |
	|         | default-k8s-diff-port-111353                               |                              |         |         |                     |                     |
	|         | --memory=2200                                              |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                              |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                      |                              |         |         |                     |                     |
	|         | --driver=docker                                            |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p                                   | default-k8s-diff-port-111353 | jenkins | v1.28.0 | 09 Nov 22 11:14 PST | 09 Nov 22 11:14 PST |
	|         | default-k8s-diff-port-111353                               |                              |         |         |                     |                     |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                              |         |         |                     |                     |
	| stop    | -p                                                         | default-k8s-diff-port-111353 | jenkins | v1.28.0 | 09 Nov 22 11:14 PST | 09 Nov 22 11:15 PST |
	|         | default-k8s-diff-port-111353                               |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                     |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-111353           | default-k8s-diff-port-111353 | jenkins | v1.28.0 | 09 Nov 22 11:15 PST | 09 Nov 22 11:15 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                              |         |         |                     |                     |
	| start   | -p                                                         | default-k8s-diff-port-111353 | jenkins | v1.28.0 | 09 Nov 22 11:15 PST | 09 Nov 22 11:20 PST |
	|         | default-k8s-diff-port-111353                               |                              |         |         |                     |                     |
	|         | --memory=2200                                              |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                              |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                      |                              |         |         |                     |                     |
	|         | --driver=docker                                            |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.25.3                               |                              |         |         |                     |                     |
	| ssh     | -p                                                         | default-k8s-diff-port-111353 | jenkins | v1.28.0 | 09 Nov 22 11:20 PST | 09 Nov 22 11:20 PST |
	|         | default-k8s-diff-port-111353                               |                              |         |         |                     |                     |
	|         | sudo crictl images -o json                                 |                              |         |         |                     |                     |
	| pause   | -p                                                         | default-k8s-diff-port-111353 | jenkins | v1.28.0 | 09 Nov 22 11:20 PST | 09 Nov 22 11:20 PST |
	|         | default-k8s-diff-port-111353                               |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                              |         |         |                     |                     |
	| unpause | -p                                                         | default-k8s-diff-port-111353 | jenkins | v1.28.0 | 09 Nov 22 11:20 PST | 09 Nov 22 11:20 PST |
	|         | default-k8s-diff-port-111353                               |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                     |                              |         |         |                     |                     |
	| delete  | -p                                                         | default-k8s-diff-port-111353 | jenkins | v1.28.0 | 09 Nov 22 11:20 PST | 09 Nov 22 11:20 PST |
	|         | default-k8s-diff-port-111353                               |                              |         |         |                     |                     |
	| delete  | -p                                                         | default-k8s-diff-port-111353 | jenkins | v1.28.0 | 09 Nov 22 11:20 PST | 09 Nov 22 11:20 PST |
	|         | default-k8s-diff-port-111353                               |                              |         |         |                     |                     |
	| start   | -p newest-cni-112024 --memory=2200 --alsologtostderr       | newest-cni-112024            | jenkins | v1.28.0 | 09 Nov 22 11:20 PST | 09 Nov 22 11:21 PST |
	|         | --wait=apiserver,system_pods,default_sa --feature-gates    |                              |         |         |                     |                     |
	|         | ServerSideApply=true --network-plugin=cni                  |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                              |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.25.3              |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-112024                 | newest-cni-112024            | jenkins | v1.28.0 | 09 Nov 22 11:21 PST | 09 Nov 22 11:21 PST |
	|         | --images=MetricsServer=k8s.gcr.io/echoserver:1.4           |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                     |                              |         |         |                     |                     |
	| stop    | -p newest-cni-112024                                       | newest-cni-112024            | jenkins | v1.28.0 | 09 Nov 22 11:21 PST | 09 Nov 22 11:21 PST |
	|         | --alsologtostderr -v=3                                     |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-112024                      | newest-cni-112024            | jenkins | v1.28.0 | 09 Nov 22 11:21 PST | 09 Nov 22 11:21 PST |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                              |         |         |                     |                     |
	| start   | -p newest-cni-112024 --memory=2200 --alsologtostderr       | newest-cni-112024            | jenkins | v1.28.0 | 09 Nov 22 11:21 PST | 09 Nov 22 11:21 PST |
	|         | --wait=apiserver,system_pods,default_sa --feature-gates    |                              |         |         |                     |                     |
	|         | ServerSideApply=true --network-plugin=cni                  |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                              |         |         |                     |                     |
	|         | --driver=docker  --kubernetes-version=v1.25.3              |                              |         |         |                     |                     |
	| ssh     | -p newest-cni-112024 sudo                                  | newest-cni-112024            | jenkins | v1.28.0 | 09 Nov 22 11:21 PST | 09 Nov 22 11:21 PST |
	|         | crictl images -o json                                      |                              |         |         |                     |                     |
	| pause   | -p newest-cni-112024                                       | newest-cni-112024            | jenkins | v1.28.0 | 09 Nov 22 11:21 PST | 09 Nov 22 11:21 PST |
	|         | --alsologtostderr -v=1                                     |                              |         |         |                     |                     |
	| unpause | -p newest-cni-112024                                       | newest-cni-112024            | jenkins | v1.28.0 | 09 Nov 22 11:21 PST | 09 Nov 22 11:21 PST |
	|         | --alsologtostderr -v=1                                     |                              |         |         |                     |                     |
	| delete  | -p newest-cni-112024                                       | newest-cni-112024            | jenkins | v1.28.0 | 09 Nov 22 11:21 PST | 09 Nov 22 11:21 PST |
	| delete  | -p newest-cni-112024                                       | newest-cni-112024            | jenkins | v1.28.0 | 09 Nov 22 11:21 PST | 09 Nov 22 11:21 PST |
	|---------|------------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/11/09 11:21:19
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.19.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1109 11:21:19.250492   39160 out.go:296] Setting OutFile to fd 1 ...
	I1109 11:21:19.250701   39160 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1109 11:21:19.250707   39160 out.go:309] Setting ErrFile to fd 2...
	I1109 11:21:19.250710   39160 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1109 11:21:19.250816   39160 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15331-22028/.minikube/bin
	I1109 11:21:19.251308   39160 out.go:303] Setting JSON to false
	I1109 11:21:19.271409   39160 start.go:116] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":15654,"bootTime":1668006025,"procs":384,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.0","kernelVersion":"22.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W1109 11:21:19.271498   39160 start.go:124] gopshost.Virtualization returned error: not implemented yet
	I1109 11:21:19.291644   39160 out.go:177] * [newest-cni-112024] minikube v1.28.0 on Darwin 13.0
	I1109 11:21:19.333544   39160 notify.go:220] Checking for updates...
	I1109 11:21:19.333563   39160 out.go:177]   - MINIKUBE_LOCATION=15331
	I1109 11:21:19.354374   39160 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15331-22028/kubeconfig
	I1109 11:21:19.375235   39160 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1109 11:21:19.396495   39160 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 11:21:19.417472   39160 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15331-22028/.minikube
	I1109 11:21:19.439217   39160 config.go:180] Loaded profile config "newest-cni-112024": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1109 11:21:19.439890   39160 driver.go:365] Setting default libvirt URI to qemu:///system
	I1109 11:21:19.502058   39160 docker.go:137] docker version: linux-20.10.20
	I1109 11:21:19.502215   39160 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 11:21:19.643318   39160 info.go:266] docker info: {ID:DDH7:AERQ:YR2O:D5QA:GJP7:5WGB:4FTA:645H:2EO7:4YBT:LF5P:55H2 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:false NGoroutines:53 SystemTime:2022-11-09 19:21:19.554950168 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServ
erAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231719936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=defaul
t name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.1] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/lo
cal/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1109 11:21:19.665289   39160 out.go:177] * Using the docker driver based on existing profile
	I1109 11:21:19.686114   39160 start.go:282] selected driver: docker
	I1109 11:21:19.686140   39160 start.go:808] validating driver "docker" against &{Name:newest-cni-112024 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:newest-cni-112024 Namespace:default APIServerName:minikubeCA APIServerNames:[] A
PIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRe
quested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1109 11:21:19.686284   39160 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 11:21:19.690131   39160 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 11:21:19.829207   39160 info.go:266] docker info: {ID:DDH7:AERQ:YR2O:D5QA:GJP7:5WGB:4FTA:645H:2EO7:4YBT:LF5P:55H2 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:false NGoroutines:53 SystemTime:2022-11-09 19:21:19.743036471 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231719936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.1] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
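
The info.go line above is minikube shelling out to docker system info --format "{{json .}}" and decoding the JSON into a Go struct. A minimal sketch of that pattern; the trimmed-down struct here is hypothetical and covers only a few of the fields visible in the dump, whereas the real minikube struct has many more:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// dockerInfo is a hypothetical subset of docker's info JSON; the
// field names match the keys docker actually emits.
type dockerInfo struct {
	ID            string `json:"ID"`
	Driver        string `json:"Driver"`
	NCPU          int    `json:"NCPU"`
	MemTotal      int64  `json:"MemTotal"`
	ServerVersion string `json:"ServerVersion"`
	OSType        string `json:"OSType"`
}

func main() {
	// Same invocation as the cli_runner line above.
	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
	if err != nil {
		panic(err)
	}
	var info dockerInfo
	if err := json.Unmarshal(out, &info); err != nil {
		panic(err)
	}
	fmt.Printf("docker %s on %s: %d CPUs, %d bytes RAM\n",
		info.ServerVersion, info.OSType, info.NCPU, info.MemTotal)
}

Decoding the whole blob through --format "{{json .}}" is sturdier than scraping the human-readable docker info output, which is why the health check is done this way.
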
	I1109 11:21:19.829378   39160 start_flags.go:920] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1109 11:21:19.829399   39160 cni.go:95] Creating CNI manager for ""
	I1109 11:21:19.829410   39160 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1109 11:21:19.829420   39160 start_flags.go:317] config:
	{Name:newest-cni-112024 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:newest-cni-112024 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1109 11:21:19.872114   39160 out.go:177] * Starting control plane node newest-cni-112024 in cluster newest-cni-112024
	I1109 11:21:19.894971   39160 cache.go:120] Beginning downloading kic base image for docker with docker
	I1109 11:21:19.916980   39160 out.go:177] * Pulling base image ...
	I1109 11:21:19.958921   39160 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1109 11:21:19.958936   39160 image.go:76] Checking for gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon
	I1109 11:21:19.959032   39160 preload.go:148] Found local preload: /Users/jenkins/minikube-integration/15331-22028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	I1109 11:21:19.959058   39160 cache.go:57] Caching tarball of preloaded images
	I1109 11:21:19.959303   39160 preload.go:174] Found /Users/jenkins/minikube-integration/15331-22028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1109 11:21:19.959321   39160 cache.go:60] Finished verifying existence of preloaded tar for  v1.25.3 on docker
	I1109 11:21:19.960399   39160 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/newest-cni-112024/config.json ...
	I1109 11:21:20.015567   39160 image.go:80] Found gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon, skipping pull
	I1109 11:21:20.015584   39160 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 exists in daemon, skipping load
	I1109 11:21:20.015594   39160 cache.go:208] Successfully downloaded all kic artifacts
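
The preload decision above (preload.go:148/174) amounts to a stat of the versioned tarball in the local cache before falling back to a download. A sketch assuming the same cache layout as the paths in the log:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Path layout taken from the log lines above; the k8s version and
	// runtime segments would normally come from the cluster config.
	home, _ := os.UserHomeDir()
	tarball := filepath.Join(home, ".minikube", "cache", "preloaded-tarball",
		"preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4")

	if _, err := os.Stat(tarball); err == nil {
		fmt.Println("found local preload, skipping download:", tarball)
	} else if os.IsNotExist(err) {
		fmt.Println("no preload cached, would download:", tarball)
	} else {
		panic(err)
	}
}
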
	I1109 11:21:20.015664   39160 start.go:364] acquiring machines lock for newest-cni-112024: {Name:mkb3d9b076019ff717d4a8d41bcef73f8245d61e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1109 11:21:20.015762   39160 start.go:368] acquired machines lock for "newest-cni-112024" in 75.201µs
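
The machines lock above carries a Delay:500ms / Timeout:10m0s spec. A sketch of the same retry-until-deadline acquisition using plain flock semantics; this is an illustrative substitute, not minikube's actual lock implementation:

package main

import (
	"fmt"
	"os"
	"syscall"
	"time"
)

// acquire takes an exclusive advisory lock on path, retrying every
// delay until timeout, mirroring the Delay/Timeout fields in the log.
func acquire(path string, delay, timeout time.Duration) (*os.File, error) {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o600)
	if err != nil {
		return nil, err
	}
	deadline := time.Now().Add(timeout)
	for {
		if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX|syscall.LOCK_NB); err == nil {
			return f, nil
		} else if time.Now().After(deadline) {
			f.Close()
			return nil, fmt.Errorf("timed out locking %s: %w", path, err)
		}
		time.Sleep(delay)
	}
}

func main() {
	start := time.Now()
	f, err := acquire("/tmp/minikube-machines.lock", 500*time.Millisecond, 10*time.Minute)
	if err != nil {
		panic(err)
	}
	defer f.Close()
	// With no contention this returns in microseconds, which matches
	// the 75.201µs figure logged above.
	fmt.Println("acquired machines lock in", time.Since(start))
}
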
	I1109 11:21:20.015789   39160 start.go:96] Skipping create...Using existing machine configuration
	I1109 11:21:20.015800   39160 fix.go:55] fixHost starting: 
	I1109 11:21:20.016072   39160 cli_runner.go:164] Run: docker container inspect newest-cni-112024 --format={{.State.Status}}
	I1109 11:21:20.073071   39160 fix.go:103] recreateIfNeeded on newest-cni-112024: state=Stopped err=<nil>
	W1109 11:21:20.073102   39160 fix.go:129] unexpected machine state, will restart: <nil>
	I1109 11:21:20.094703   39160 out.go:177] * Restarting existing docker container for "newest-cni-112024" ...
	I1109 11:21:20.115674   39160 cli_runner.go:164] Run: docker start newest-cni-112024
	I1109 11:21:20.441087   39160 cli_runner.go:164] Run: docker container inspect newest-cni-112024 --format={{.State.Status}}
	I1109 11:21:20.499587   39160 kic.go:415] container "newest-cni-112024" state is running.
	I1109 11:21:20.500177   39160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-112024
	I1109 11:21:20.559949   39160 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/newest-cni-112024/config.json ...
	I1109 11:21:20.560442   39160 machine.go:88] provisioning docker machine ...
	I1109 11:21:20.560468   39160 ubuntu.go:169] provisioning hostname "newest-cni-112024"
	I1109 11:21:20.560590   39160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-112024
	I1109 11:21:20.622097   39160 main.go:134] libmachine: Using SSH client type: native
	I1109 11:21:20.622310   39160 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6d00] 0x13e9e80 <nil>  [] 0s} 127.0.0.1 49707 <nil> <nil>}
	I1109 11:21:20.622325   39160 main.go:134] libmachine: About to run SSH command:
	sudo hostname newest-cni-112024 && echo "newest-cni-112024" | sudo tee /etc/hostname
	I1109 11:21:20.754124   39160 main.go:134] libmachine: SSH cmd err, output: <nil>: newest-cni-112024
	
	I1109 11:21:20.754256   39160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-112024
	I1109 11:21:20.814598   39160 main.go:134] libmachine: Using SSH client type: native
	I1109 11:21:20.814766   39160 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6d00] 0x13e9e80 <nil>  [] 0s} 127.0.0.1 49707 <nil> <nil>}
	I1109 11:21:20.814781   39160 main.go:134] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-112024' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-112024/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-112024' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1109 11:21:20.931522   39160 main.go:134] libmachine: SSH cmd err, output: <nil>: 
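
The provisioning commands above (hostname, /etc/hosts) are run over SSH against the container's forwarded port 127.0.0.1:49707 with the machine's id_rsa key. A sketch of that flow with golang.org/x/crypto/ssh; the host-key check is skipped here only because this models a throwaway test rig:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path, user, and port taken from the sshutil lines in this log.
	key, err := os.ReadFile("/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/newest-cni-112024/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test container only
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:49707", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()

	// Same command the provisioner runs above.
	out, err := sess.CombinedOutput(
		`sudo hostname newest-cni-112024 && echo "newest-cni-112024" | sudo tee /etc/hostname`)
	fmt.Printf("out=%s err=%v\n", out, err)
}
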
	I1109 11:21:20.931540   39160 ubuntu.go:175] set auth options {CertDir:/Users/jenkins/minikube-integration/15331-22028/.minikube CaCertPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/15331-22028/.minikube}
	I1109 11:21:20.931564   39160 ubuntu.go:177] setting up certificates
	I1109 11:21:20.931572   39160 provision.go:83] configureAuth start
	I1109 11:21:20.931661   39160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-112024
	I1109 11:21:20.990243   39160 provision.go:138] copyHostCerts
	I1109 11:21:20.990343   39160 exec_runner.go:144] found /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.pem, removing ...
	I1109 11:21:20.990355   39160 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.pem
	I1109 11:21:20.990455   39160 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.pem (1082 bytes)
	I1109 11:21:20.990675   39160 exec_runner.go:144] found /Users/jenkins/minikube-integration/15331-22028/.minikube/cert.pem, removing ...
	I1109 11:21:20.990683   39160 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15331-22028/.minikube/cert.pem
	I1109 11:21:20.990753   39160 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/15331-22028/.minikube/cert.pem (1123 bytes)
	I1109 11:21:20.990908   39160 exec_runner.go:144] found /Users/jenkins/minikube-integration/15331-22028/.minikube/key.pem, removing ...
	I1109 11:21:20.990914   39160 exec_runner.go:207] rm: /Users/jenkins/minikube-integration/15331-22028/.minikube/key.pem
	I1109 11:21:20.990979   39160 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/15331-22028/.minikube/key.pem (1675 bytes)
	I1109 11:21:20.991118   39160 provision.go:112] generating server cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca-key.pem org=jenkins.newest-cni-112024 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube newest-cni-112024]
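
The provision.go:112 line generates a server certificate signed by the minikube CA, with the san=[...] entries from the log becoming the certificate's subject alternative names. A self-contained sketch with crypto/x509; keys are generated inline for brevity, whereas the real step loads ca.pem/ca-key.pem from the paths logged above:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Stand-in CA; the real one is the persistent minikubeCA key pair.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the config
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with the SANs from the san=[...] list above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-112024"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("192.168.76.2"), net.ParseIP("127.0.0.1")},
		DNSNames:     []string{"localhost", "minikube", "newest-cni-112024"},
	}
	der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
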
	I1109 11:21:21.083685   39160 provision.go:172] copyRemoteCerts
	I1109 11:21:21.083755   39160 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1109 11:21:21.083826   39160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-112024
	I1109 11:21:21.141889   39160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49707 SSHKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/newest-cni-112024/id_rsa Username:docker}
	I1109 11:21:21.231846   39160 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1109 11:21:21.249070   39160 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1109 11:21:21.266570   39160 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1109 11:21:21.283949   39160 provision.go:86] duration metric: configureAuth took 352.36787ms
	I1109 11:21:21.283961   39160 ubuntu.go:193] setting minikube options for container-runtime
	I1109 11:21:21.284133   39160 config.go:180] Loaded profile config "newest-cni-112024": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1109 11:21:21.284214   39160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-112024
	I1109 11:21:21.341090   39160 main.go:134] libmachine: Using SSH client type: native
	I1109 11:21:21.341244   39160 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6d00] 0x13e9e80 <nil>  [] 0s} 127.0.0.1 49707 <nil> <nil>}
	I1109 11:21:21.341255   39160 main.go:134] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1109 11:21:21.458534   39160 main.go:134] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1109 11:21:21.458562   39160 ubuntu.go:71] root file system type: overlay
	I1109 11:21:21.458766   39160 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
	I1109 11:21:21.458874   39160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-112024
	I1109 11:21:21.515420   39160 main.go:134] libmachine: Using SSH client type: native
	I1109 11:21:21.515593   39160 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6d00] 0x13e9e80 <nil>  [] 0s} 127.0.0.1 49707 <nil> <nil>}
	I1109 11:21:21.515647   39160 main.go:134] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1109 11:21:21.642537   39160 main.go:134] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I1109 11:21:21.642666   39160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-112024
	I1109 11:21:21.699425   39160 main.go:134] libmachine: Using SSH client type: native
	I1109 11:21:21.699575   39160 main.go:134] libmachine: &{{{<nil> 0 [] [] []} docker [0x13e6d00] 0x13e9e80 <nil>  [] 0s} 127.0.0.1 49707 <nil> <nil>}
	I1109 11:21:21.699588   39160 main.go:134] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1109 11:21:21.821380   39160 main.go:134] libmachine: SSH cmd err, output: <nil>: 
	I1109 11:21:21.821394   39160 machine.go:91] provisioned docker machine in 1.26095514s
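
The docker.service update above is deliberately idempotent: the rendered unit is written to docker.service.new, and only if it differs from the installed unit is it moved into place followed by daemon-reload/enable/restart. A local sketch of that compare-then-swap; the real code runs the equivalent shell over SSH with sudo:

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// installUnit mirrors the diff-or-replace one-liner above: only when
// the rendered unit differs from what's installed do we move it into
// place and bounce docker.
func installUnit(newPath, livePath string) error {
	live, _ := os.ReadFile(livePath) // missing live file just means "differs"
	rendered, err := os.ReadFile(newPath)
	if err != nil {
		return err
	}
	if bytes.Equal(live, rendered) {
		fmt.Println("unit unchanged, nothing to do")
		return nil
	}
	if err := os.Rename(newPath, livePath); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"systemctl", "-f", "daemon-reload"},
		{"systemctl", "-f", "enable", "docker"},
		{"systemctl", "-f", "restart", "docker"},
	} {
		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	if err := installUnit("/lib/systemd/system/docker.service.new",
		"/lib/systemd/system/docker.service"); err != nil {
		panic(err)
	}
}

Skipping the restart when nothing changed matters on the restart path: it keeps an already-healthy dockerd (and every container under it) running.
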
	I1109 11:21:21.821404   39160 start.go:300] post-start starting for "newest-cni-112024" (driver="docker")
	I1109 11:21:21.821411   39160 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1109 11:21:21.821489   39160 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1109 11:21:21.821554   39160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-112024
	I1109 11:21:21.879098   39160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49707 SSHKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/newest-cni-112024/id_rsa Username:docker}
	I1109 11:21:21.966075   39160 ssh_runner.go:195] Run: cat /etc/os-release
	I1109 11:21:21.969926   39160 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1109 11:21:21.969941   39160 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1109 11:21:21.969949   39160 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1109 11:21:21.969953   39160 info.go:137] Remote host: Ubuntu 20.04.5 LTS
	I1109 11:21:21.969969   39160 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15331-22028/.minikube/addons for local assets ...
	I1109 11:21:21.970058   39160 filesync.go:126] Scanning /Users/jenkins/minikube-integration/15331-22028/.minikube/files for local assets ...
	I1109 11:21:21.970225   39160 filesync.go:149] local asset: /Users/jenkins/minikube-integration/15331-22028/.minikube/files/etc/ssl/certs/228682.pem -> 228682.pem in /etc/ssl/certs
	I1109 11:21:21.970407   39160 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1109 11:21:21.977559   39160 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/files/etc/ssl/certs/228682.pem --> /etc/ssl/certs/228682.pem (1708 bytes)
	I1109 11:21:21.994896   39160 start.go:303] post-start completed in 173.48274ms
	I1109 11:21:21.994979   39160 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 11:21:21.995052   39160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-112024
	I1109 11:21:22.052090   39160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49707 SSHKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/newest-cni-112024/id_rsa Username:docker}
	I1109 11:21:22.137116   39160 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1109 11:21:22.141699   39160 fix.go:57] fixHost completed within 2.125919191s
	I1109 11:21:22.141712   39160 start.go:83] releasing machines lock for "newest-cni-112024", held for 2.125962483s
	I1109 11:21:22.141812   39160 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-112024
	I1109 11:21:22.198522   39160 ssh_runner.go:195] Run: systemctl --version
	I1109 11:21:22.198526   39160 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1109 11:21:22.198594   39160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-112024
	I1109 11:21:22.198609   39160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-112024
	I1109 11:21:22.258322   39160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49707 SSHKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/newest-cni-112024/id_rsa Username:docker}
	I1109 11:21:22.258508   39160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49707 SSHKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/newest-cni-112024/id_rsa Username:docker}
	I1109 11:21:22.397725   39160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1109 11:21:22.405310   39160 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (233 bytes)
	I1109 11:21:22.417548   39160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 11:21:22.487062   39160 ssh_runner.go:195] Run: sudo systemctl restart cri-docker
	I1109 11:21:22.562595   39160 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1109 11:21:22.572765   39160 cruntime.go:273] skipping containerd shutdown because we are bound to it
	I1109 11:21:22.572846   39160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1109 11:21:22.582010   39160 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	image-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1109 11:21:22.594586   39160 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1109 11:21:22.661141   39160 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1109 11:21:22.723833   39160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 11:21:22.793728   39160 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1109 11:21:23.032910   39160 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1109 11:21:23.091777   39160 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1109 11:21:23.167053   39160 ssh_runner.go:195] Run: sudo systemctl start cri-docker.socket
	I1109 11:21:23.176124   39160 start.go:451] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1109 11:21:23.176209   39160 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1109 11:21:23.180105   39160 start.go:472] Will wait 60s for crictl version
	I1109 11:21:23.180162   39160 ssh_runner.go:195] Run: sudo crictl version
	I1109 11:21:23.209364   39160 start.go:481] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  20.10.20
	RuntimeApiVersion:  1.41.0
	I1109 11:21:23.209453   39160 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1109 11:21:23.238328   39160 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1109 11:21:23.320205   39160 out.go:204] * Preparing Kubernetes v1.25.3 on Docker 20.10.20 ...
	I1109 11:21:23.320437   39160 cli_runner.go:164] Run: docker exec -t newest-cni-112024 dig +short host.docker.internal
	I1109 11:21:23.436903   39160 network.go:96] got host ip for mount in container by digging dns: 192.168.65.2
	I1109 11:21:23.437023   39160 ssh_runner.go:195] Run: grep 192.168.65.2	host.minikube.internal$ /etc/hosts
	I1109 11:21:23.441174   39160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.2	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 11:21:23.450768   39160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-112024
	I1109 11:21:23.529730   39160 out.go:177]   - kubeadm.pod-network-cidr=192.168.111.111/16
	I1109 11:21:23.551487   39160 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1109 11:21:23.551679   39160 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1109 11:21:23.576825   39160 docker.go:613] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1109 11:21:23.576843   39160 docker.go:543] Images already preloaded, skipping extraction
	I1109 11:21:23.576946   39160 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1109 11:21:23.600374   39160 docker.go:613] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.25.3
	registry.k8s.io/kube-controller-manager:v1.25.3
	registry.k8s.io/kube-scheduler:v1.25.3
	registry.k8s.io/kube-proxy:v1.25.3
	registry.k8s.io/pause:3.8
	registry.k8s.io/etcd:3.5.4-0
	registry.k8s.io/coredns/coredns:v1.9.3
	k8s.gcr.io/pause:3.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1109 11:21:23.600396   39160 cache_images.go:84] Images are preloaded, skipping loading
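
The "Images are preloaded, skipping loading" decision comes from listing the daemon's images and checking that every expected preload image is already present. A sketch against a subset of the list above:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same listing command as in the log above.
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		panic(err)
	}
	have := map[string]bool{}
	for _, img := range strings.Fields(string(out)) {
		have[img] = true
	}
	// Subset of the preloaded-image list in the stdout block above.
	expected := []string{
		"registry.k8s.io/kube-apiserver:v1.25.3",
		"registry.k8s.io/etcd:3.5.4-0",
		"gcr.io/k8s-minikube/storage-provisioner:v5",
	}
	missing := 0
	for _, img := range expected {
		if !have[img] {
			fmt.Println("missing:", img)
			missing++
		}
	}
	if missing == 0 {
		fmt.Println("images are preloaded, skipping loading")
	}
}
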
	I1109 11:21:23.600496   39160 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1109 11:21:23.668141   39160 cni.go:95] Creating CNI manager for ""
	I1109 11:21:23.668156   39160 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1109 11:21:23.668171   39160 kubeadm.go:87] Using pod CIDR: 192.168.111.111/16
	I1109 11:21:23.668190   39160 kubeadm.go:156] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:192.168.111.111/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.25.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-112024 NodeName:newest-cni-112024 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false}
	I1109 11:21:23.668320   39160 kubeadm.go:161] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/cri-dockerd.sock
	  name: "newest-cni-112024"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.25.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "192.168.111.111/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "192.168.111.111/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1109 11:21:23.668412   39160 kubeadm.go:962] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.25.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/cri-dockerd.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-112024 --image-service-endpoint=/var/run/cri-dockerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.25.3 ClusterName:newest-cni-112024 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
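
The kubeadm config and kubelet unit shown above are rendered from the options struct in the kubeadm.go:156 line. A hypothetical, heavily trimmed version of that templating step; minikube's real template covers the full InitConfiguration/ClusterConfiguration/KubeletConfiguration/KubeProxyConfiguration set:

package main

import (
	"os"
	"text/template"
)

// Trimmed-down stand-in for minikube's kubeadm template.
const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
controlPlaneEndpoint: {{.ControlPlaneAddress}}:{{.APIServerPort}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

type opts struct {
	ControlPlaneAddress string
	APIServerPort       int
	KubernetesVersion   string
	PodSubnet           string
	ServiceCIDR         string
}

func main() {
	o := opts{ // values lifted from the kubeadm options line above
		ControlPlaneAddress: "control-plane.minikube.internal",
		APIServerPort:       8443,
		KubernetesVersion:   "v1.25.3",
		PodSubnet:           "192.168.111.111/16",
		ServiceCIDR:         "10.96.0.0/12",
	}
	if err := template.Must(template.New("kubeadm").Parse(tmpl)).Execute(os.Stdout, o); err != nil {
		panic(err)
	}
}

This is also where the kubeadm.pod-network-cidr=192.168.111.111/16 extra option flows through to the podSubnet field seen in the rendered YAML.
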
	I1109 11:21:23.668484   39160 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.25.3
	I1109 11:21:23.675916   39160 binaries.go:44] Found k8s binaries, skipping transfer
	I1109 11:21:23.675991   39160 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1109 11:21:23.682803   39160 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (516 bytes)
	I1109 11:21:23.695359   39160 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1109 11:21:23.707382   39160 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2175 bytes)
	I1109 11:21:23.719747   39160 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1109 11:21:23.723250   39160 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1109 11:21:23.732456   39160 certs.go:54] Setting up /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/newest-cni-112024 for IP: 192.168.76.2
	I1109 11:21:23.732579   39160 certs.go:182] skipping minikubeCA CA generation: /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.key
	I1109 11:21:23.732638   39160 certs.go:182] skipping proxyClientCA CA generation: /Users/jenkins/minikube-integration/15331-22028/.minikube/proxy-client-ca.key
	I1109 11:21:23.732729   39160 certs.go:298] skipping minikube-user signed cert generation: /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/newest-cni-112024/client.key
	I1109 11:21:23.732789   39160 certs.go:298] skipping minikube signed cert generation: /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/newest-cni-112024/apiserver.key.31bdca25
	I1109 11:21:23.732867   39160 certs.go:298] skipping aggregator signed cert generation: /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/newest-cni-112024/proxy-client.key
	I1109 11:21:23.733124   39160 certs.go:388] found cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/22868.pem (1338 bytes)
	W1109 11:21:23.733166   39160 certs.go:384] ignoring /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/22868_empty.pem, impossibly tiny 0 bytes
	I1109 11:21:23.733178   39160 certs.go:388] found cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca-key.pem (1675 bytes)
	I1109 11:21:23.733212   39160 certs.go:388] found cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/ca.pem (1082 bytes)
	I1109 11:21:23.733250   39160 certs.go:388] found cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/cert.pem (1123 bytes)
	I1109 11:21:23.733282   39160 certs.go:388] found cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/Users/jenkins/minikube-integration/15331-22028/.minikube/certs/key.pem (1675 bytes)
	I1109 11:21:23.733362   39160 certs.go:388] found cert: /Users/jenkins/minikube-integration/15331-22028/.minikube/files/etc/ssl/certs/Users/jenkins/minikube-integration/15331-22028/.minikube/files/etc/ssl/certs/228682.pem (1708 bytes)
	I1109 11:21:23.733920   39160 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/newest-cni-112024/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1109 11:21:23.750892   39160 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/newest-cni-112024/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1109 11:21:23.768167   39160 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/newest-cni-112024/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1109 11:21:23.785946   39160 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/newest-cni-112024/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1109 11:21:23.804780   39160 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1109 11:21:23.823985   39160 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1109 11:21:23.840615   39160 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1109 11:21:23.859237   39160 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1109 11:21:23.875884   39160 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/certs/22868.pem --> /usr/share/ca-certificates/22868.pem (1338 bytes)
	I1109 11:21:23.893260   39160 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/files/etc/ssl/certs/228682.pem --> /usr/share/ca-certificates/228682.pem (1708 bytes)
	I1109 11:21:23.910190   39160 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/15331-22028/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1109 11:21:23.926752   39160 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1109 11:21:23.939587   39160 ssh_runner.go:195] Run: openssl version
	I1109 11:21:23.944934   39160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22868.pem && ln -fs /usr/share/ca-certificates/22868.pem /etc/ssl/certs/22868.pem"
	I1109 11:21:23.952917   39160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22868.pem
	I1109 11:21:23.957001   39160 certs.go:431] hashing: -rw-r--r-- 1 root root 1338 Nov  9 18:08 /usr/share/ca-certificates/22868.pem
	I1109 11:21:23.957054   39160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22868.pem
	I1109 11:21:23.962494   39160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/22868.pem /etc/ssl/certs/51391683.0"
	I1109 11:21:23.969644   39160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/228682.pem && ln -fs /usr/share/ca-certificates/228682.pem /etc/ssl/certs/228682.pem"
	I1109 11:21:23.977860   39160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/228682.pem
	I1109 11:21:23.981699   39160 certs.go:431] hashing: -rw-r--r-- 1 root root 1708 Nov  9 18:08 /usr/share/ca-certificates/228682.pem
	I1109 11:21:23.981752   39160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/228682.pem
	I1109 11:21:23.987127   39160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/228682.pem /etc/ssl/certs/3ec20f2e.0"
	I1109 11:21:23.994374   39160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1109 11:21:24.002343   39160 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1109 11:21:24.006209   39160 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Nov  9 18:04 /usr/share/ca-certificates/minikubeCA.pem
	I1109 11:21:24.006266   39160 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1109 11:21:24.011403   39160 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
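
The openssl/ln pairs above implement the standard OpenSSL trust-store convention: each CA under /etc/ssl/certs is symlinked by its OpenSSL subject hash plus a ".0" suffix, which is how minikubeCA.pem becomes b5213941.0. A sketch of one iteration:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem"
	// Same hashing command as the ssh_runner lines above.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	os.Remove(link) // emulate ln's -f
	if err := os.Symlink(cert, link); err != nil {
		panic(err)
	}
	fmt.Println(link, "->", cert)
}
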
	I1109 11:21:24.018420   39160 kubeadm.go:396] StartCluster: {Name:newest-cni-112024 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:newest-cni-112024 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:192.168.111.111/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:k8s.gcr.io/echoserver:1.4 MetricsServer:k8s.gcr.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1109 11:21:24.018543   39160 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1109 11:21:24.041143   39160 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1109 11:21:24.048733   39160 kubeadm.go:411] found existing configuration files, will attempt cluster restart
	I1109 11:21:24.048750   39160 kubeadm.go:627] restartCluster start
	I1109 11:21:24.048809   39160 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1109 11:21:24.055552   39160 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1109 11:21:24.055646   39160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-112024
	I1109 11:21:24.113955   39160 kubeconfig.go:135] verify returned: extract IP: "newest-cni-112024" does not appear in /Users/jenkins/minikube-integration/15331-22028/kubeconfig
	I1109 11:21:24.114135   39160 kubeconfig.go:146] "newest-cni-112024" context is missing from /Users/jenkins/minikube-integration/15331-22028/kubeconfig - will repair!
	I1109 11:21:24.114459   39160 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15331-22028/kubeconfig: {Name:mk02bb1c68cad934afd737965b2dbda8f5a4ba2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 11:21:24.115814   39160 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1109 11:21:24.123333   39160 api_server.go:165] Checking apiserver status ...
	I1109 11:21:24.123386   39160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1109 11:21:24.131539   39160 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1109 11:21:24.333659   39160 api_server.go:165] Checking apiserver status ...
	I1109 11:21:24.333851   39160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1109 11:21:24.344340   39160 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1109 11:21:24.531707   39160 api_server.go:165] Checking apiserver status ...
	I1109 11:21:24.531794   39160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1109 11:21:24.541194   39160 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1109 11:21:24.733703   39160 api_server.go:165] Checking apiserver status ...
	I1109 11:21:24.733917   39160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1109 11:21:24.744705   39160 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1109 11:21:24.933657   39160 api_server.go:165] Checking apiserver status ...
	I1109 11:21:24.933853   39160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1109 11:21:24.944220   39160 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1109 11:21:25.133656   39160 api_server.go:165] Checking apiserver status ...
	I1109 11:21:25.133822   39160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1109 11:21:25.144493   39160 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1109 11:21:25.332287   39160 api_server.go:165] Checking apiserver status ...
	I1109 11:21:25.332399   39160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1109 11:21:25.341232   39160 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1109 11:21:25.532871   39160 api_server.go:165] Checking apiserver status ...
	I1109 11:21:25.532995   39160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1109 11:21:25.543611   39160 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1109 11:21:25.733749   39160 api_server.go:165] Checking apiserver status ...
	I1109 11:21:25.733880   39160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1109 11:21:25.745212   39160 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1109 11:21:25.932416   39160 api_server.go:165] Checking apiserver status ...
	I1109 11:21:25.932543   39160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1109 11:21:25.942796   39160 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1109 11:21:26.133697   39160 api_server.go:165] Checking apiserver status ...
	I1109 11:21:26.133885   39160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1109 11:21:26.144623   39160 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1109 11:21:26.333706   39160 api_server.go:165] Checking apiserver status ...
	I1109 11:21:26.333909   39160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1109 11:21:26.345355   39160 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1109 11:21:26.533690   39160 api_server.go:165] Checking apiserver status ...
	I1109 11:21:26.533903   39160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1109 11:21:26.545240   39160 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1109 11:21:26.733654   39160 api_server.go:165] Checking apiserver status ...
	I1109 11:21:26.733834   39160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1109 11:21:26.744812   39160 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1109 11:21:26.932078   39160 api_server.go:165] Checking apiserver status ...
	I1109 11:21:26.932214   39160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1109 11:21:26.942869   39160 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1109 11:21:27.131916   39160 api_server.go:165] Checking apiserver status ...
	I1109 11:21:27.132050   39160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1109 11:21:27.143076   39160 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1109 11:21:27.143086   39160 api_server.go:165] Checking apiserver status ...
	I1109 11:21:27.143146   39160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1109 11:21:27.151214   39160 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I1109 11:21:27.151226   39160 kubeadm.go:602] needs reconfigure: apiserver error: timed out waiting for the condition
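
The block above is a poll loop: pgrep for the apiserver roughly every 200ms until a deadline, then conclude "timed out waiting for the condition" and fall back to reconfiguring. A sketch of the same wait pattern, run locally here rather than through minikube's ssh_runner:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls pgrep the way the loop above does: retry on
// a short interval and give up with a timeout error at the deadline.
func waitForAPIServer(interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// Same process match as the log: an exact full-command-line pgrep.
		out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			fmt.Printf("apiserver pid: %s", out)
			return nil
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("timed out waiting for the condition")
}

func main() {
	if err := waitForAPIServer(200*time.Millisecond, 3*time.Second); err != nil {
		fmt.Println("needs reconfigure: apiserver error:", err)
	}
}
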
	I1109 11:21:27.151234   39160 kubeadm.go:1114] stopping kube-system containers ...
	I1109 11:21:27.151313   39160 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1109 11:21:27.175295   39160 docker.go:444] Stopping containers: [69ad812ba421 a1ffed4fd8c7 034bc5adb025 ec85c77e6fe3 ff5b37456038 05384ace7dfb 662a3d22b99f 8e643aa63efa bb2c6ce3933d e4fa0ccc8dd0 f1b0990aaac6 d6eac9e51a3c 4f5537f577af 5d9de125dd0d 9d2f4a7ccb70 e356058b0875]
	I1109 11:21:27.175407   39160 ssh_runner.go:195] Run: docker stop 69ad812ba421 a1ffed4fd8c7 034bc5adb025 ec85c77e6fe3 ff5b37456038 05384ace7dfb 662a3d22b99f 8e643aa63efa bb2c6ce3933d e4fa0ccc8dd0 f1b0990aaac6 d6eac9e51a3c 4f5537f577af 5d9de125dd0d 9d2f4a7ccb70 e356058b0875
	I1109 11:21:27.198801   39160 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1109 11:21:27.208997   39160 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1109 11:21:27.216555   39160 kubeadm.go:155] found existing configuration files:
	-rw------- 1 root root 5639 Nov  9 19:20 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Nov  9 19:20 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 Nov  9 19:20 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Nov  9 19:20 /etc/kubernetes/scheduler.conf
	
	I1109 11:21:27.216616   39160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1109 11:21:27.223941   39160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1109 11:21:27.231428   39160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1109 11:21:27.238956   39160 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1109 11:21:27.239014   39160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1109 11:21:27.246096   39160 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1109 11:21:27.253303   39160 kubeadm.go:166] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1109 11:21:27.253372   39160 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1109 11:21:27.260298   39160 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1109 11:21:27.267773   39160 kubeadm.go:704] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I1109 11:21:27.267786   39160 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1109 11:21:27.317803   39160 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1109 11:21:27.812620   39160 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1109 11:21:27.940417   39160 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1109 11:21:27.992684   39160 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
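
	Rather than running a full `kubeadm init`, the restart path replays individual init phases against the rendered config. The five invocations above reduce to the loop below, assuming the same binary path and config file as this run:

	    # replay the kubeadm init phases in the order minikube ran them
	    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
	      # $phase is intentionally unquoted so e.g. "certs all" expands to two arguments
	      sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" \
	        kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
	    done
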
	I1109 11:21:28.083541   39160 api_server.go:51] waiting for apiserver process to appear ...
	I1109 11:21:28.083619   39160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:21:28.595563   39160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:21:29.094246   39160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:21:29.160583   39160 api_server.go:71] duration metric: took 1.077053042s to wait for apiserver process to appear ...
	I1109 11:21:29.160604   39160 api_server.go:87] waiting for apiserver healthz status ...
	I1109 11:21:29.160624   39160 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:49706/healthz ...
	I1109 11:21:29.161900   39160 api_server.go:268] stopped: https://127.0.0.1:49706/healthz: Get "https://127.0.0.1:49706/healthz": EOF
	I1109 11:21:29.663027   39160 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:49706/healthz ...
	I1109 11:21:31.868827   39160 api_server.go:278] https://127.0.0.1:49706/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1109 11:21:31.868846   39160 api_server.go:102] status: https://127.0.0.1:49706/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1109 11:21:32.163604   39160 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:49706/healthz ...
	I1109 11:21:32.170966   39160 api_server.go:278] https://127.0.0.1:49706/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1109 11:21:32.170993   39160 api_server.go:102] status: https://127.0.0.1:49706/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1109 11:21:32.662313   39160 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:49706/healthz ...
	I1109 11:21:32.668140   39160 api_server.go:278] https://127.0.0.1:49706/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1109 11:21:32.668156   39160 api_server.go:102] status: https://127.0.0.1:49706/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1109 11:21:33.162003   39160 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:49706/healthz ...
	I1109 11:21:33.168204   39160 api_server.go:278] https://127.0.0.1:49706/healthz returned 200:
	ok
	I1109 11:21:33.175497   39160 api_server.go:140] control plane version: v1.25.3
	I1109 11:21:33.175514   39160 api_server.go:130] duration metric: took 4.0149409s to wait for apiserver health ...
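
	The ~4 s wait above is a plain poll of the apiserver's /healthz endpoint until it returns 200; while post-start hooks (RBAC bootstrap roles, priority classes, and so on) are still completing, it returns 500 with the per-check breakdown shown. A hedged curl equivalent, assuming the forwarded port from this run (49706) and skipping TLS verification because the serving cert is cluster-internal:

	    # poll until healthy, then print the same [+]/[-] check list the 500 responses carried
	    until [ "$(curl -sk -o /dev/null -w '%{http_code}' https://127.0.0.1:49706/healthz)" = "200" ]; do
	      sleep 0.5
	    done
	    curl -sk "https://127.0.0.1:49706/healthz?verbose"
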
	I1109 11:21:33.175521   39160 cni.go:95] Creating CNI manager for ""
	I1109 11:21:33.175530   39160 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1109 11:21:33.175543   39160 system_pods.go:43] waiting for kube-system pods to appear ...
	I1109 11:21:33.183186   39160 system_pods.go:59] 8 kube-system pods found
	I1109 11:21:33.183203   39160 system_pods.go:61] "coredns-565d847f94-c62vb" [587a714d-b418-44bc-9040-50008d1ddd27] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 11:21:33.183209   39160 system_pods.go:61] "etcd-newest-cni-112024" [cf6064fa-e5b6-40b3-bd80-65c64b4947ea] Running
	I1109 11:21:33.183213   39160 system_pods.go:61] "kube-apiserver-newest-cni-112024" [f15ac211-a183-47c9-9190-aa7c5ef9d845] Running
	I1109 11:21:33.183217   39160 system_pods.go:61] "kube-controller-manager-newest-cni-112024" [677a9613-cd21-4af4-a753-94a2414d2d82] Running
	I1109 11:21:33.183222   39160 system_pods.go:61] "kube-proxy-n9s2b" [1fcf5391-a216-431b-9b90-42578a36915a] Running
	I1109 11:21:33.183228   39160 system_pods.go:61] "kube-scheduler-newest-cni-112024" [7c0ddce7-09ba-4073-8bf9-885064d664a3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1109 11:21:33.183235   39160 system_pods.go:61] "metrics-server-5c8fd5cf8-swf96" [668082b5-81b6-4c62-be89-56ddf1564689] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1109 11:21:33.183239   39160 system_pods.go:61] "storage-provisioner" [f7f00579-9d08-494c-ad8b-2b43d998452e] Running
	I1109 11:21:33.183243   39160 system_pods.go:74] duration metric: took 7.693928ms to wait for pod list to return data ...
	I1109 11:21:33.183250   39160 node_conditions.go:102] verifying NodePressure condition ...
	I1109 11:21:33.186289   39160 node_conditions.go:122] node storage ephemeral capacity is 115273188Ki
	I1109 11:21:33.186304   39160 node_conditions.go:123] node cpu capacity is 6
	I1109 11:21:33.186315   39160 node_conditions.go:105] duration metric: took 3.060066ms to run NodePressure ...
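
	The NodePressure step reads the capacities logged above from the node object. A hedged equivalent using the cluster's bundled kubectl, with the node name taken from this profile:

	    sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl \
	      get node newest-cni-112024 -o jsonpath='{.status.capacity}'
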
	I1109 11:21:33.186331   39160 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1109 11:21:33.467116   39160 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1109 11:21:33.476211   39160 ops.go:34] apiserver oom_adj: -16
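
	The -16 read back above biases the kernel OOM killer away from the apiserver process. A hedged check run from inside the node that also reads the kernel's current knob (oom_adj is the legacy interface):

	    cat /proc/$(pgrep -xn kube-apiserver)/oom_adj
	    # oom_adj is deprecated; the same bias is exposed (on a different scale) here
	    cat /proc/$(pgrep -xn kube-apiserver)/oom_score_adj
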
	I1109 11:21:33.476230   39160 kubeadm.go:631] restartCluster took 9.427560742s
	I1109 11:21:33.476245   39160 kubeadm.go:398] StartCluster complete in 9.45791491s
	I1109 11:21:33.476266   39160 settings.go:142] acquiring lock: {Name:mke93232301b59b22d43a378e933baa222d3feda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 11:21:33.476350   39160 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/15331-22028/kubeconfig
	I1109 11:21:33.478291   39160 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15331-22028/kubeconfig: {Name:mk02bb1c68cad934afd737965b2dbda8f5a4ba2d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 11:21:33.481545   39160 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "newest-cni-112024" rescaled to 1
	I1109 11:21:33.481582   39160 start.go:212] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1109 11:21:33.481597   39160 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.25.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1109 11:21:33.481634   39160 addons.go:486] enableAddons start: toEnable=map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true], additional=[]
	I1109 11:21:33.481850   39160 config.go:180] Loaded profile config "newest-cni-112024": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1109 11:21:33.526553   39160 out.go:177] * Verifying Kubernetes components...
	I1109 11:21:33.526629   39160 addons.go:65] Setting storage-provisioner=true in profile "newest-cni-112024"
	I1109 11:21:33.526629   39160 addons.go:65] Setting dashboard=true in profile "newest-cni-112024"
	I1109 11:21:33.547675   39160 addons.go:227] Setting addon storage-provisioner=true in "newest-cni-112024"
	I1109 11:21:33.547679   39160 addons.go:227] Setting addon dashboard=true in "newest-cni-112024"
	I1109 11:21:33.526634   39160 addons.go:65] Setting default-storageclass=true in profile "newest-cni-112024"
	W1109 11:21:33.547687   39160 addons.go:236] addon dashboard should already be in state true
	I1109 11:21:33.547707   39160 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 11:21:33.526640   39160 addons.go:65] Setting metrics-server=true in profile "newest-cni-112024"
	I1109 11:21:33.547718   39160 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-112024"
	I1109 11:21:33.547727   39160 addons.go:227] Setting addon metrics-server=true in "newest-cni-112024"
	W1109 11:21:33.547733   39160 addons.go:236] addon metrics-server should already be in state true
	W1109 11:21:33.547687   39160 addons.go:236] addon storage-provisioner should already be in state true
	I1109 11:21:33.547751   39160 host.go:66] Checking if "newest-cni-112024" exists ...
	I1109 11:21:33.547767   39160 host.go:66] Checking if "newest-cni-112024" exists ...
	I1109 11:21:33.547794   39160 host.go:66] Checking if "newest-cni-112024" exists ...
	I1109 11:21:33.548040   39160 cli_runner.go:164] Run: docker container inspect newest-cni-112024 --format={{.State.Status}}
	I1109 11:21:33.548103   39160 cli_runner.go:164] Run: docker container inspect newest-cni-112024 --format={{.State.Status}}
	I1109 11:21:33.548150   39160 cli_runner.go:164] Run: docker container inspect newest-cni-112024 --format={{.State.Status}}
	I1109 11:21:33.548177   39160 cli_runner.go:164] Run: docker container inspect newest-cni-112024 --format={{.State.Status}}
	I1109 11:21:33.678554   39160 out.go:177]   - Using image fake.domain/k8s.gcr.io/echoserver:1.4
	I1109 11:21:33.656499   39160 addons.go:227] Setting addon default-storageclass=true in "newest-cni-112024"
	I1109 11:21:33.675486   39160 start.go:806] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I1109 11:21:33.675541   39160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-112024
	I1109 11:21:33.714500   39160 addons.go:419] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1109 11:21:33.735893   39160 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1109 11:21:33.772708   39160 out.go:177]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	W1109 11:21:33.772721   39160 addons.go:236] addon default-storageclass should already be in state true
	I1109 11:21:33.772722   39160 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1109 11:21:33.809800   39160 addons.go:419] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 11:21:33.830710   39160 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1109 11:21:33.830722   39160 host.go:66] Checking if "newest-cni-112024" exists ...
	I1109 11:21:33.830865   39160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-112024
	I1109 11:21:33.867721   39160 out.go:177]   - Using image k8s.gcr.io/echoserver:1.4
	I1109 11:21:33.830878   39160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-112024
	I1109 11:21:33.831240   39160 cli_runner.go:164] Run: docker container inspect newest-cni-112024 --format={{.State.Status}}
	I1109 11:21:33.904756   39160 addons.go:419] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1109 11:21:33.904778   39160 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1109 11:21:33.905475   39160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-112024
	I1109 11:21:33.919339   39160 api_server.go:51] waiting for apiserver process to appear ...
	I1109 11:21:33.919452   39160 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 11:21:33.940329   39160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49707 SSHKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/newest-cni-112024/id_rsa Username:docker}
	I1109 11:21:33.948627   39160 api_server.go:71] duration metric: took 467.028048ms to wait for apiserver process to appear ...
	I1109 11:21:33.948698   39160 api_server.go:87] waiting for apiserver healthz status ...
	I1109 11:21:33.948713   39160 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:49706/healthz ...
	I1109 11:21:33.959880   39160 api_server.go:278] https://127.0.0.1:49706/healthz returned 200:
	ok
	I1109 11:21:33.961634   39160 api_server.go:140] control plane version: v1.25.3
	I1109 11:21:33.961649   39160 api_server.go:130] duration metric: took 12.941285ms to wait for apiserver health ...
	I1109 11:21:33.961658   39160 system_pods.go:43] waiting for kube-system pods to appear ...
	I1109 11:21:33.969813   39160 system_pods.go:59] 8 kube-system pods found
	I1109 11:21:33.969850   39160 system_pods.go:61] "coredns-565d847f94-c62vb" [587a714d-b418-44bc-9040-50008d1ddd27] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1109 11:21:33.969860   39160 system_pods.go:61] "etcd-newest-cni-112024" [cf6064fa-e5b6-40b3-bd80-65c64b4947ea] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1109 11:21:33.969870   39160 system_pods.go:61] "kube-apiserver-newest-cni-112024" [f15ac211-a183-47c9-9190-aa7c5ef9d845] Running
	I1109 11:21:33.969879   39160 system_pods.go:61] "kube-controller-manager-newest-cni-112024" [677a9613-cd21-4af4-a753-94a2414d2d82] Running
	I1109 11:21:33.969885   39160 system_pods.go:61] "kube-proxy-n9s2b" [1fcf5391-a216-431b-9b90-42578a36915a] Running
	I1109 11:21:33.969904   39160 system_pods.go:61] "kube-scheduler-newest-cni-112024" [7c0ddce7-09ba-4073-8bf9-885064d664a3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1109 11:21:33.969921   39160 system_pods.go:61] "metrics-server-5c8fd5cf8-swf96" [668082b5-81b6-4c62-be89-56ddf1564689] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1109 11:21:33.969931   39160 system_pods.go:61] "storage-provisioner" [f7f00579-9d08-494c-ad8b-2b43d998452e] Running
	I1109 11:21:33.969939   39160 system_pods.go:74] duration metric: took 8.275476ms to wait for pod list to return data ...
	I1109 11:21:33.969954   39160 default_sa.go:34] waiting for default service account to be created ...
	I1109 11:21:33.973693   39160 default_sa.go:45] found service account: "default"
	I1109 11:21:33.973713   39160 default_sa.go:55] duration metric: took 3.751162ms for default service account to be created ...
	I1109 11:21:33.973725   39160 kubeadm.go:573] duration metric: took 492.129941ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I1109 11:21:33.973745   39160 node_conditions.go:102] verifying NodePressure condition ...
	I1109 11:21:33.977462   39160 node_conditions.go:122] node storage ephemeral capacity is 115273188Ki
	I1109 11:21:33.977479   39160 node_conditions.go:123] node cpu capacity is 6
	I1109 11:21:33.977489   39160 node_conditions.go:105] duration metric: took 3.739667ms to run NodePressure ...
	I1109 11:21:33.977501   39160 start.go:217] waiting for startup goroutines ...
	I1109 11:21:33.998152   39160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49707 SSHKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/newest-cni-112024/id_rsa Username:docker}
	I1109 11:21:33.999612   39160 addons.go:419] installing /etc/kubernetes/addons/storageclass.yaml
	I1109 11:21:33.999629   39160 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1109 11:21:33.999730   39160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-112024
	I1109 11:21:34.001886   39160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49707 SSHKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/newest-cni-112024/id_rsa Username:docker}
	I1109 11:21:34.068959   39160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49707 SSHKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/newest-cni-112024/id_rsa Username:docker}
	I1109 11:21:34.074072   39160 addons.go:419] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1109 11:21:34.074084   39160 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1820 bytes)
	I1109 11:21:34.100522   39160 addons.go:419] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1109 11:21:34.100539   39160 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1109 11:21:34.154472   39160 addons.go:419] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1109 11:21:34.154484   39160 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1109 11:21:34.160052   39160 addons.go:419] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1109 11:21:34.160066   39160 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1109 11:21:34.166248   39160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1109 11:21:34.176599   39160 addons.go:419] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1109 11:21:34.176623   39160 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1109 11:21:34.177280   39160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1109 11:21:34.194194   39160 addons.go:419] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1109 11:21:34.194207   39160 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1109 11:21:34.250400   39160 addons.go:419] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1109 11:21:34.250419   39160 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4206 bytes)
	I1109 11:21:34.253560   39160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1109 11:21:34.273242   39160 addons.go:419] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1109 11:21:34.288106   39160 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1109 11:21:34.362206   39160 addons.go:419] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1109 11:21:34.362218   39160 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1109 11:21:34.473870   39160 addons.go:419] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1109 11:21:34.473887   39160 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1109 11:21:34.556895   39160 addons.go:419] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1109 11:21:34.556911   39160 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1109 11:21:34.578415   39160 addons.go:419] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1109 11:21:34.578431   39160 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1109 11:21:34.596247   39160 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1109 11:21:35.463340   39160 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.297079899s)
	I1109 11:21:35.470083   39160 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.292789582s)
	I1109 11:21:35.470109   39160 addons.go:457] Verifying addon metrics-server=true in "newest-cni-112024"
	I1109 11:21:35.470131   39160 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.182052729s)
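
	Each addon is staged by copying its manifest into /etc/kubernetes/addons and applying it with the cluster's bundled kubectl, as the completions above show. A hypothetical follow-up check (not part of this run) could confirm that the metrics-server Deployment actually rolls out:

	    sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.25.3/kubectl \
	      -n kube-system rollout status deployment/metrics-server --timeout=90s
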
	I1109 11:21:35.595388   39160 out.go:177] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-112024 addons enable metrics-server	
	
	
	I1109 11:21:35.637444   39160 out.go:177] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	I1109 11:21:35.712357   39160 addons.go:488] enableAddons completed in 2.230736857s
	I1109 11:21:35.712756   39160 ssh_runner.go:195] Run: rm -f paused
	I1109 11:21:35.752187   39160 start.go:506] kubectl: 1.25.2, cluster: 1.25.3 (minor skew: 0)
	I1109 11:21:35.773394   39160 out.go:177] * Done! kubectl is now configured to use "newest-cni-112024" cluster and "default" namespace by default
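
	The start completes with four addons enabled. A hedged way to confirm their state after the fact, using the same binary and profile as the test run:

	    out/minikube-darwin-amd64 -p newest-cni-112024 addons list
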
	
	* 
	* ==> Docker <==
	* -- Logs begin at Wed 2022-11-09 19:06:03 UTC, end at Wed 2022-11-09 19:32:58 UTC. --
	Nov 09 19:06:05 old-k8s-version-110019 systemd[1]: Stopping Docker Application Container Engine...
	Nov 09 19:06:05 old-k8s-version-110019 dockerd[132]: time="2022-11-09T19:06:05.637484207Z" level=info msg="Processing signal 'terminated'"
	Nov 09 19:06:05 old-k8s-version-110019 dockerd[132]: time="2022-11-09T19:06:05.638447358Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Nov 09 19:06:05 old-k8s-version-110019 dockerd[132]: time="2022-11-09T19:06:05.638987991Z" level=info msg="Daemon shutdown complete"
	Nov 09 19:06:05 old-k8s-version-110019 systemd[1]: docker.service: Succeeded.
	Nov 09 19:06:05 old-k8s-version-110019 systemd[1]: Stopped Docker Application Container Engine.
	Nov 09 19:06:05 old-k8s-version-110019 systemd[1]: Starting Docker Application Container Engine...
	Nov 09 19:06:05 old-k8s-version-110019 dockerd[426]: time="2022-11-09T19:06:05.689235014Z" level=info msg="Starting up"
	Nov 09 19:06:05 old-k8s-version-110019 dockerd[426]: time="2022-11-09T19:06:05.690811504Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Nov 09 19:06:05 old-k8s-version-110019 dockerd[426]: time="2022-11-09T19:06:05.690844834Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Nov 09 19:06:05 old-k8s-version-110019 dockerd[426]: time="2022-11-09T19:06:05.690861011Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Nov 09 19:06:05 old-k8s-version-110019 dockerd[426]: time="2022-11-09T19:06:05.690872322Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Nov 09 19:06:05 old-k8s-version-110019 dockerd[426]: time="2022-11-09T19:06:05.691897097Z" level=info msg="parsed scheme: \"unix\"" module=grpc
	Nov 09 19:06:05 old-k8s-version-110019 dockerd[426]: time="2022-11-09T19:06:05.691925520Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
	Nov 09 19:06:05 old-k8s-version-110019 dockerd[426]: time="2022-11-09T19:06:05.691937393Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}" module=grpc
	Nov 09 19:06:05 old-k8s-version-110019 dockerd[426]: time="2022-11-09T19:06:05.691943086Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
	Nov 09 19:06:05 old-k8s-version-110019 dockerd[426]: time="2022-11-09T19:06:05.695241631Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
	Nov 09 19:06:05 old-k8s-version-110019 dockerd[426]: time="2022-11-09T19:06:05.699296631Z" level=info msg="Loading containers: start."
	Nov 09 19:06:05 old-k8s-version-110019 dockerd[426]: time="2022-11-09T19:06:05.776641771Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Nov 09 19:06:05 old-k8s-version-110019 dockerd[426]: time="2022-11-09T19:06:05.808607293Z" level=info msg="Loading containers: done."
	Nov 09 19:06:05 old-k8s-version-110019 dockerd[426]: time="2022-11-09T19:06:05.816483948Z" level=info msg="Docker daemon" commit=03df974 graphdriver(s)=overlay2 version=20.10.20
	Nov 09 19:06:05 old-k8s-version-110019 dockerd[426]: time="2022-11-09T19:06:05.816574036Z" level=info msg="Daemon has completed initialization"
	Nov 09 19:06:05 old-k8s-version-110019 systemd[1]: Started Docker Application Container Engine.
	Nov 09 19:06:05 old-k8s-version-110019 dockerd[426]: time="2022-11-09T19:06:05.837400763Z" level=info msg="API listen on [::]:2376"
	Nov 09 19:06:05 old-k8s-version-110019 dockerd[426]: time="2022-11-09T19:06:05.842877088Z" level=info msg="API listen on /var/run/docker.sock"
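
	The daemon log above shows a clean stop/start cycle of dockerd inside the node. A hedged way to reproduce the restart and confirm the daemon version it reports, assuming systemd manages docker in the node image as it does here:

	    sudo systemctl restart docker
	    docker version --format '{{.Server.Version}}'   # 20.10.20 in this run
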
	
	* 
	* ==> container status <==
	* time="2022-11-09T19:33:00Z" level=fatal msg="connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
	CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES
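
	The fatal above means the log collector could not reach the CRI socket: dockershim is served by the kubelet, and with the kubelet crash-looping (see the kubelet section below) nothing answers on unix:///var/run/dockershim.sock. A hedged manual probe of the same endpoint, assuming crictl is present in the node image:

	    sudo crictl --runtime-endpoint unix:///var/run/dockershim.sock --timeout 10s ps -a
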
	
	* 
	* ==> describe nodes <==
	* 
	* ==> dmesg <==
	* 
	* 
	* ==> kernel <==
	*  19:33:00 up  4:32,  0 users,  load average: 0.08, 0.22, 0.55
	Linux old-k8s-version-110019 5.15.49-linuxkit #1 SMP Tue Sep 13 07:51:46 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.5 LTS"
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2022-11-09 19:06:03 UTC, end at Wed 2022-11-09 19:33:00 UTC. --
	Nov 09 19:32:59 old-k8s-version-110019 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Nov 09 19:32:59 old-k8s-version-110019 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1667.
	Nov 09 19:32:59 old-k8s-version-110019 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Nov 09 19:32:59 old-k8s-version-110019 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Nov 09 19:32:59 old-k8s-version-110019 kubelet[34261]: I1109 19:32:59.914322   34261 server.go:410] Version: v1.16.0
	Nov 09 19:32:59 old-k8s-version-110019 kubelet[34261]: I1109 19:32:59.914564   34261 plugins.go:100] No cloud provider specified.
	Nov 09 19:32:59 old-k8s-version-110019 kubelet[34261]: I1109 19:32:59.914586   34261 server.go:773] Client rotation is on, will bootstrap in background
	Nov 09 19:32:59 old-k8s-version-110019 kubelet[34261]: I1109 19:32:59.916650   34261 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Nov 09 19:32:59 old-k8s-version-110019 kubelet[34261]: W1109 19:32:59.917203   34261 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Nov 09 19:32:59 old-k8s-version-110019 kubelet[34261]: W1109 19:32:59.917291   34261 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Nov 09 19:32:59 old-k8s-version-110019 kubelet[34261]: F1109 19:32:59.917341   34261 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Nov 09 19:32:59 old-k8s-version-110019 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Nov 09 19:32:59 old-k8s-version-110019 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Nov 09 19:33:00 old-k8s-version-110019 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1668.
	Nov 09 19:33:00 old-k8s-version-110019 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Nov 09 19:33:00 old-k8s-version-110019 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Nov 09 19:33:00 old-k8s-version-110019 kubelet[34297]: I1109 19:33:00.662647   34297 server.go:410] Version: v1.16.0
	Nov 09 19:33:00 old-k8s-version-110019 kubelet[34297]: I1109 19:33:00.662891   34297 plugins.go:100] No cloud provider specified.
	Nov 09 19:33:00 old-k8s-version-110019 kubelet[34297]: I1109 19:33:00.662901   34297 server.go:773] Client rotation is on, will bootstrap in background
	Nov 09 19:33:00 old-k8s-version-110019 kubelet[34297]: I1109 19:33:00.664710   34297 certificate_store.go:129] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Nov 09 19:33:00 old-k8s-version-110019 kubelet[34297]: W1109 19:33:00.665568   34297 server.go:613] failed to get the kubelet's cgroup: mountpoint for cpu not found.  Kubelet system container metrics may be missing.
	Nov 09 19:33:00 old-k8s-version-110019 kubelet[34297]: W1109 19:33:00.665671   34297 server.go:620] failed to get the container runtime's cgroup: failed to get container name for docker process: mountpoint for cpu not found. Runtime system container metrics may be missing.
	Nov 09 19:33:00 old-k8s-version-110019 kubelet[34297]: F1109 19:33:00.665745   34297 server.go:271] failed to run Kubelet: mountpoint for cpu not found
	Nov 09 19:33:00 old-k8s-version-110019 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Nov 09 19:33:00 old-k8s-version-110019 systemd[1]: kubelet.service: Failed with result 'exit-code'.
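
	The kubelet here is v1.16.0 and is crash-looping on "failed to run Kubelet: mountpoint for cpu not found". Kubelets of that era only understand cgroup v1, so this error typically means the v1 cpu controller hierarchy is not mounted, for example on a host that exposes only the unified cgroup v2 tree. A quick hedged check from inside the node:

	    # cgroup v1: expect the cpu controller mounted under /sys/fs/cgroup
	    mount -t cgroup | grep -w cpu
	    # a cgroup-v2-only host has a single unified mount instead
	    mount -t cgroup2
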
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E1109 11:33:00.485500   40055 logs.go:192] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
	! unable to fetch logs for: describe nodes

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-110019 -n old-k8s-version-110019
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-110019 -n old-k8s-version-110019: exit status 2 (390.110096ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-110019" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (554.70s)

                                                
                                    

Test pass (261/295)

Order    Passed test    Duration (s)
3 TestDownloadOnly/v1.16.0/json-events 13.06
4 TestDownloadOnly/v1.16.0/preload-exists 0
7 TestDownloadOnly/v1.16.0/kubectl 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.3
10 TestDownloadOnly/v1.25.3/json-events 6.67
11 TestDownloadOnly/v1.25.3/preload-exists 0
14 TestDownloadOnly/v1.25.3/kubectl 0
15 TestDownloadOnly/v1.25.3/LogsDuration 0.29
16 TestDownloadOnly/DeleteAll 0.67
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.39
18 TestDownloadOnlyKic 10.05
19 TestBinaryMirror 1.71
20 TestOffline 86.14
22 TestAddons/Setup 147.85
26 TestAddons/parallel/MetricsServer 5.56
27 TestAddons/parallel/HelmTiller 13
29 TestAddons/parallel/CSI 39.56
30 TestAddons/parallel/Headlamp 12.49
31 TestAddons/parallel/CloudSpanner 5.48
33 TestAddons/serial/GCPAuth 15.53
34 TestAddons/StoppedEnableDisable 12.93
35 TestCertOptions 30.86
36 TestCertExpiration 237.04
37 TestDockerFlags 33.68
38 TestForceSystemdFlag 31.4
39 TestForceSystemdEnv 30.49
41 TestHyperKitDriverInstallOrUpdate 7.77
44 TestErrorSpam/setup 27.47
45 TestErrorSpam/start 2.38
46 TestErrorSpam/status 1.23
47 TestErrorSpam/pause 1.79
48 TestErrorSpam/unpause 1.91
49 TestErrorSpam/stop 13.06
52 TestFunctional/serial/CopySyncFile 0
53 TestFunctional/serial/StartWithProxy 90.99
54 TestFunctional/serial/AuditLog 0
55 TestFunctional/serial/SoftStart 36.82
56 TestFunctional/serial/KubeContext 0.04
57 TestFunctional/serial/KubectlGetPods 0.08
60 TestFunctional/serial/CacheCmd/cache/add_remote 5.89
61 TestFunctional/serial/CacheCmd/cache/add_local 1.81
62 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.08
63 TestFunctional/serial/CacheCmd/cache/list 0.08
64 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.43
65 TestFunctional/serial/CacheCmd/cache/cache_reload 2.4
66 TestFunctional/serial/CacheCmd/cache/delete 0.16
67 TestFunctional/serial/MinikubeKubectlCmd 0.51
68 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.65
69 TestFunctional/serial/ExtraConfig 44.1
70 TestFunctional/serial/ComponentHealth 0.06
71 TestFunctional/serial/LogsCmd 3.14
72 TestFunctional/serial/LogsFileCmd 3.05
74 TestFunctional/parallel/ConfigCmd 0.51
75 TestFunctional/parallel/DashboardCmd 17.22
76 TestFunctional/parallel/DryRun 1.48
77 TestFunctional/parallel/InternationalLanguage 0.71
78 TestFunctional/parallel/StatusCmd 1.41
81 TestFunctional/parallel/ServiceCmd 19.85
83 TestFunctional/parallel/AddonsCmd 0.26
84 TestFunctional/parallel/PersistentVolumeClaim 28.91
86 TestFunctional/parallel/SSHCmd 0.81
87 TestFunctional/parallel/CpCmd 2.07
88 TestFunctional/parallel/MySQL 24.33
89 TestFunctional/parallel/FileSync 0.43
90 TestFunctional/parallel/CertSync 2.58
94 TestFunctional/parallel/NodeLabels 0.07
96 TestFunctional/parallel/NonActiveRuntimeDisabled 0.51
98 TestFunctional/parallel/License 0.58
99 TestFunctional/parallel/Version/short 0.1
100 TestFunctional/parallel/Version/components 0.71
101 TestFunctional/parallel/ImageCommands/ImageListShort 0.31
102 TestFunctional/parallel/ImageCommands/ImageListTable 0.34
103 TestFunctional/parallel/ImageCommands/ImageListJson 0.31
104 TestFunctional/parallel/ImageCommands/ImageListYaml 0.31
105 TestFunctional/parallel/ImageCommands/ImageBuild 3.91
106 TestFunctional/parallel/ImageCommands/Setup 2.51
107 TestFunctional/parallel/DockerEnv/bash 1.87
108 TestFunctional/parallel/UpdateContextCmd/no_changes 0.3
109 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.4
110 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.29
111 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.32
112 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.33
113 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 8
114 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.7
115 TestFunctional/parallel/ImageCommands/ImageRemove 0.76
116 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 2.15
117 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 3.01
119 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
121 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.14
122 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
123 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
127 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
128 TestFunctional/parallel/ProfileCmd/profile_not_create 0.61
129 TestFunctional/parallel/ProfileCmd/profile_list 0.52
130 TestFunctional/parallel/ProfileCmd/profile_json_output 0.51
131 TestFunctional/parallel/MountCmd/any-port 8.9
132 TestFunctional/parallel/MountCmd/specific-port 2.26
133 TestFunctional/delete_addon-resizer_images 0.15
134 TestFunctional/delete_my-image_image 0.06
135 TestFunctional/delete_minikube_cached_images 0.06
145 TestJSONOutput/start/Command 79.94
146 TestJSONOutput/start/Audit 0
148 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
149 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
151 TestJSONOutput/pause/Command 0.63
152 TestJSONOutput/pause/Audit 0
154 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
155 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
157 TestJSONOutput/unpause/Command 0.58
158 TestJSONOutput/unpause/Audit 0
160 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
161 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
163 TestJSONOutput/stop/Command 12.29
164 TestJSONOutput/stop/Audit 0
166 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
167 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
168 TestErrorJSONOutput 0.75
170 TestKicCustomNetwork/create_custom_network 30.74
171 TestKicCustomNetwork/use_default_bridge_network 29.76
172 TestKicExistingNetwork 29.54
173 TestKicCustomSubnet 28.67
174 TestMainNoArgs 0.08
175 TestMinikubeProfile 60.71
178 TestMountStart/serial/StartWithMountFirst 7.2
179 TestMountStart/serial/VerifyMountFirst 0.4
180 TestMountStart/serial/StartWithMountSecond 7.19
181 TestMountStart/serial/VerifyMountSecond 0.39
182 TestMountStart/serial/DeleteFirst 2.13
183 TestMountStart/serial/VerifyMountPostDelete 0.4
184 TestMountStart/serial/Stop 1.59
185 TestMountStart/serial/RestartStopped 5.4
186 TestMountStart/serial/VerifyMountPostStop 0.39
189 TestMultiNode/serial/FreshStart2Nodes 83.07
190 TestMultiNode/serial/DeployApp2Nodes 5.6
191 TestMultiNode/serial/PingHostFrom2Pods 0.9
192 TestMultiNode/serial/AddNode 24.92
193 TestMultiNode/serial/ProfileList 0.43
194 TestMultiNode/serial/CopyFile 14.35
195 TestMultiNode/serial/StopNode 13.75
196 TestMultiNode/serial/StartAfterStop 19.21
197 TestMultiNode/serial/RestartKeepsNodes 115.63
198 TestMultiNode/serial/DeleteNode 16.83
199 TestMultiNode/serial/StopMultiNode 24.88
201 TestMultiNode/serial/ValidateNameConflict 31.63
205 TestPreload 146.06
207 TestScheduledStopUnix 101.89
208 TestSkaffold 59.97
210 TestInsufficientStorage 12.27
226 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 7.51
227 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 10.13
228 TestStoppedBinaryUpgrade/Setup 0.98
230 TestStoppedBinaryUpgrade/MinikubeLogs 3.56
232 TestPause/serial/Start 89.94
233 TestPause/serial/SecondStartNoReconfiguration 56.78
234 TestPause/serial/Pause 0.87
235 TestPause/serial/VerifyStatus 0.49
236 TestPause/serial/Unpause 0.91
237 TestPause/serial/PauseAgain 1.04
238 TestPause/serial/DeletePaused 2.91
239 TestPause/serial/VerifyDeletedResources 0.58
248 TestNoKubernetes/serial/StartNoK8sWithVersion 0.42
249 TestNoKubernetes/serial/StartWithK8s 29.48
250 TestNoKubernetes/serial/StartWithStopK8s 8.11
251 TestNoKubernetes/serial/Start 6.45
252 TestNoKubernetes/serial/VerifyK8sNotRunning 0.38
253 TestNoKubernetes/serial/ProfileList 15.16
254 TestNoKubernetes/serial/Stop 1.58
255 TestNoKubernetes/serial/StartNoArgs 4.1
256 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.38
257 TestNetworkPlugins/group/auto/Start 44.65
258 TestNetworkPlugins/group/auto/KubeletFlags 0.4
259 TestNetworkPlugins/group/auto/NetCatPod 13.22
260 TestNetworkPlugins/group/auto/DNS 0.12
261 TestNetworkPlugins/group/auto/Localhost 0.12
262 TestNetworkPlugins/group/auto/HairPin 5.11
263 TestNetworkPlugins/group/kindnet/Start 48.79
264 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
265 TestNetworkPlugins/group/kindnet/KubeletFlags 0.4
266 TestNetworkPlugins/group/kindnet/NetCatPod 11.19
267 TestNetworkPlugins/group/kindnet/DNS 0.11
268 TestNetworkPlugins/group/kindnet/Localhost 0.11
269 TestNetworkPlugins/group/kindnet/HairPin 0.12
270 TestNetworkPlugins/group/cilium/Start 98.16
271 TestNetworkPlugins/group/cilium/ControllerPod 5.02
272 TestNetworkPlugins/group/cilium/KubeletFlags 0.48
273 TestNetworkPlugins/group/cilium/NetCatPod 13.67
274 TestNetworkPlugins/group/calico/Start 325.7
275 TestNetworkPlugins/group/cilium/DNS 0.12
276 TestNetworkPlugins/group/cilium/Localhost 0.12
277 TestNetworkPlugins/group/cilium/HairPin 0.13
278 TestNetworkPlugins/group/false/Start 44.5
279 TestNetworkPlugins/group/false/KubeletFlags 0.44
280 TestNetworkPlugins/group/false/NetCatPod 14.21
281 TestNetworkPlugins/group/false/DNS 0.13
282 TestNetworkPlugins/group/false/Localhost 0.11
283 TestNetworkPlugins/group/false/HairPin 5.11
284 TestNetworkPlugins/group/bridge/Start 43.1
285 TestNetworkPlugins/group/bridge/KubeletFlags 0.43
286 TestNetworkPlugins/group/bridge/NetCatPod 12.19
287 TestNetworkPlugins/group/bridge/DNS 0.13
288 TestNetworkPlugins/group/bridge/Localhost 0.11
289 TestNetworkPlugins/group/bridge/HairPin 0.12
290 TestNetworkPlugins/group/enable-default-cni/Start 79.3
291 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.46
292 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.19
293 TestNetworkPlugins/group/enable-default-cni/DNS 0.13
294 TestNetworkPlugins/group/enable-default-cni/Localhost 0.11
295 TestNetworkPlugins/group/enable-default-cni/HairPin 0.11
296 TestNetworkPlugins/group/kubenet/Start 49.86
297 TestNetworkPlugins/group/kubenet/KubeletFlags 0.4
298 TestNetworkPlugins/group/kubenet/NetCatPod 13.2
299 TestNetworkPlugins/group/kubenet/DNS 0.12
300 TestNetworkPlugins/group/kubenet/Localhost 0.11
302 TestNetworkPlugins/group/calico/ControllerPod 5.02
303 TestNetworkPlugins/group/calico/KubeletFlags 0.43
304 TestNetworkPlugins/group/calico/NetCatPod 13.21
305 TestNetworkPlugins/group/calico/DNS 0.13
306 TestNetworkPlugins/group/calico/Localhost 0.11
307 TestNetworkPlugins/group/calico/HairPin 0.11
311 TestStartStop/group/no-preload/serial/FirstStart 55.6
312 TestStartStop/group/no-preload/serial/DeployApp 9.26
313 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.8
314 TestStartStop/group/no-preload/serial/Stop 12.42
315 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.38
316 TestStartStop/group/no-preload/serial/SecondStart 300.07
319 TestStartStop/group/old-k8s-version/serial/Stop 1.59
320 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.4
322 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 16.02
323 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
324 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.43
325 TestStartStop/group/no-preload/serial/Pause 3.17
327 TestStartStop/group/embed-certs/serial/FirstStart 44.24
328 TestStartStop/group/embed-certs/serial/DeployApp 10.26
329 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.75
330 TestStartStop/group/embed-certs/serial/Stop 12.4
331 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.39
332 TestStartStop/group/embed-certs/serial/SecondStart 297.94
333 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 13.02
334 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
335 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.43
336 TestStartStop/group/embed-certs/serial/Pause 3.16
338 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 44.54
340 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.33
341 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.76
342 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.45
343 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.38
344 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 302.69
345 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 9.02
346 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
347 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.48
348 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.19
350 TestStartStop/group/newest-cni/serial/FirstStart 40.69
351 TestStartStop/group/newest-cni/serial/DeployApp 0
352 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.85
353 TestStartStop/group/newest-cni/serial/Stop 12.42
354 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.4
355 TestStartStop/group/newest-cni/serial/SecondStart 17.05
356 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
357 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
358 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.44
359 TestStartStop/group/newest-cni/serial/Pause 3.16
TestDownloadOnly/v1.16.0/json-events (13.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-100255 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:71: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-100255 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=docker --driver=docker : (13.060467004s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (13.06s)

                                                
                                    
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
--- PASS: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.16.0/LogsDuration (0.3s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-100255
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-100255: exit status 85 (299.125794ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-100255 | jenkins | v1.28.0 | 09 Nov 22 10:02 PST |          |
	|         | -p download-only-100255        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/11/09 10:02:55
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.19.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1109 10:02:55.164593   22872 out.go:296] Setting OutFile to fd 1 ...
	I1109 10:02:55.164863   22872 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1109 10:02:55.164868   22872 out.go:309] Setting ErrFile to fd 2...
	I1109 10:02:55.164872   22872 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1109 10:02:55.164976   22872 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15331-22028/.minikube/bin
	W1109 10:02:55.165083   22872 root.go:311] Error reading config file at /Users/jenkins/minikube-integration/15331-22028/.minikube/config/config.json: open /Users/jenkins/minikube-integration/15331-22028/.minikube/config/config.json: no such file or directory
	I1109 10:02:55.165831   22872 out.go:303] Setting JSON to true
	I1109 10:02:55.188449   22872 start.go:116] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":10950,"bootTime":1668006025,"procs":398,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.0","kernelVersion":"22.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W1109 10:02:55.188551   22872 start.go:124] gopshost.Virtualization returned error: not implemented yet
	I1109 10:02:55.211296   22872 out.go:97] [download-only-100255] minikube v1.28.0 on Darwin 13.0
	I1109 10:02:55.211534   22872 notify.go:220] Checking for updates...
	W1109 10:02:55.211579   22872 preload.go:295] Failed to list preload files: open /Users/jenkins/minikube-integration/15331-22028/.minikube/cache/preloaded-tarball: no such file or directory
	I1109 10:02:55.232915   22872 out.go:169] MINIKUBE_LOCATION=15331
	I1109 10:02:55.277088   22872 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/15331-22028/kubeconfig
	I1109 10:02:55.319835   22872 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I1109 10:02:55.341177   22872 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 10:02:55.363113   22872 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/15331-22028/.minikube
	W1109 10:02:55.406021   22872 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1109 10:02:55.406418   22872 driver.go:365] Setting default libvirt URI to qemu:///system
	I1109 10:02:55.469187   22872 docker.go:137] docker version: linux-20.10.20
	I1109 10:02:55.469315   22872 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 10:02:55.611859   22872 info.go:266] docker info: {ID:DDH7:AERQ:YR2O:D5QA:GJP7:5WGB:4FTA:645H:2EO7:4YBT:LF5P:55H2 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:false NGoroutines:44 SystemTime:2022-11-09 18:02:55.522617005 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231719936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.1] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1109 10:02:55.633476   22872 out.go:97] Using the docker driver based on user configuration
	I1109 10:02:55.633558   22872 start.go:282] selected driver: docker
	I1109 10:02:55.633573   22872 start.go:808] validating driver "docker" against <nil>
	I1109 10:02:55.633837   22872 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 10:02:55.779194   22872 info.go:266] docker info: {ID:DDH7:AERQ:YR2O:D5QA:GJP7:5WGB:4FTA:645H:2EO7:4YBT:LF5P:55H2 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:false NGoroutines:44 SystemTime:2022-11-09 18:02:55.690848259 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231719936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.1] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1109 10:02:55.779296   22872 start_flags.go:303] no existing cluster config was found, will generate one from the flags 
	I1109 10:02:55.781968   22872 start_flags.go:384] Using suggested 5895MB memory alloc based on sys=32768MB, container=5943MB
	I1109 10:02:55.782070   22872 start_flags.go:883] Wait components to verify : map[apiserver:true system_pods:true]
	I1109 10:02:55.803482   22872 out.go:169] Using Docker Desktop driver with root privileges
	I1109 10:02:55.825545   22872 cni.go:95] Creating CNI manager for ""
	I1109 10:02:55.825580   22872 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1109 10:02:55.825595   22872 start_flags.go:317] config:
	{Name:download-only-100255 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:5895 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-100255 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1109 10:02:55.847270   22872 out.go:97] Starting control plane node download-only-100255 in cluster download-only-100255
	I1109 10:02:55.847318   22872 cache.go:120] Beginning downloading kic base image for docker with docker
	I1109 10:02:55.869224   22872 out.go:97] Pulling base image ...
	I1109 10:02:55.869305   22872 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1109 10:02:55.869408   22872 image.go:76] Checking for gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon
	I1109 10:02:55.920254   22872 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I1109 10:02:55.920276   22872 cache.go:57] Caching tarball of preloaded images
	I1109 10:02:55.920495   22872 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1109 10:02:55.941076   22872 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I1109 10:02:55.941135   22872 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I1109 10:02:55.944425   22872 cache.go:147] Downloading gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 to local cache
	I1109 10:02:55.944643   22872 image.go:60] Checking for gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local cache directory
	I1109 10:02:55.944832   22872 image.go:120] Writing gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 to local cache
	I1109 10:02:56.018079   22872 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4?checksum=md5:326f3ce331abb64565b50b8c9e791244 -> /Users/jenkins/minikube-integration/15331-22028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4
	I1109 10:03:00.631034   22872 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I1109 10:03:00.631182   22872 preload.go:256] verifying checksum of /Users/jenkins/minikube-integration/15331-22028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-docker-overlay2-amd64.tar.lz4 ...
	I1109 10:03:01.237501   22872 cache.go:60] Finished verifying existence of preloaded tar for  v1.16.0 on docker
	I1109 10:03:01.237706   22872 profile.go:148] Saving config to /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/download-only-100255/config.json ...
	I1109 10:03:01.237735   22872 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/download-only-100255/config.json: {Name:mkbfd06c07b19a6d2951f761885fb8b29b463d7e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1109 10:03:01.237997   22872 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime docker
	I1109 10:03:01.238252   22872 download.go:101] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/darwin/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.16.0/bin/darwin/amd64/kubectl.sha1 -> /Users/jenkins/minikube-integration/15331-22028/.minikube/cache/darwin/amd64/v1.16.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-100255"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.30s)
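
Note on the non-zero exit above: a --download-only profile only populates the cache and never creates a node, so "minikube logs" has no control plane to read from and exits non-zero (85 in this run). A minimal sketch of reproducing this outside the harness (the profile name "download-demo" is illustrative):

    # download artifacts only; no container or VM is created
    minikube start -p download-demo --download-only --kubernetes-version=v1.16.0 --driver=docker

    # no control plane node exists yet, so this fails by design
    minikube logs -p download-demo; echo "exit: $?"

    minikube delete -p download-demo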

TestDownloadOnly/v1.25.3/json-events (6.67s)

=== RUN   TestDownloadOnly/v1.25.3/json-events
aaa_download_only_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-100255 --force --alsologtostderr --kubernetes-version=v1.25.3 --container-runtime=docker --driver=docker 
aaa_download_only_test.go:71: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-100255 --force --alsologtostderr --kubernetes-version=v1.25.3 --container-runtime=docker --driver=docker : (6.667652271s)
--- PASS: TestDownloadOnly/v1.25.3/json-events (6.67s)

TestDownloadOnly/v1.25.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.25.3/preload-exists
--- PASS: TestDownloadOnly/v1.25.3/preload-exists (0.00s)

TestDownloadOnly/v1.25.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.25.3/kubectl
--- PASS: TestDownloadOnly/v1.25.3/kubectl (0.00s)

TestDownloadOnly/v1.25.3/LogsDuration (0.29s)

=== RUN   TestDownloadOnly/v1.25.3/LogsDuration
aaa_download_only_test.go:173: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-100255
aaa_download_only_test.go:173: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-100255: exit status 85 (288.412908ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-100255 | jenkins | v1.28.0 | 09 Nov 22 10:02 PST |          |
	|         | -p download-only-100255        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-100255 | jenkins | v1.28.0 | 09 Nov 22 10:03 PST |          |
	|         | -p download-only-100255        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.25.3   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2022/11/09 10:03:08
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.19.3 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1109 10:03:08.527623   22912 out.go:296] Setting OutFile to fd 1 ...
	I1109 10:03:08.527866   22912 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1109 10:03:08.527871   22912 out.go:309] Setting ErrFile to fd 2...
	I1109 10:03:08.527875   22912 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1109 10:03:08.527983   22912 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15331-22028/.minikube/bin
	W1109 10:03:08.528082   22912 root.go:311] Error reading config file at /Users/jenkins/minikube-integration/15331-22028/.minikube/config/config.json: open /Users/jenkins/minikube-integration/15331-22028/.minikube/config/config.json: no such file or directory
	I1109 10:03:08.528452   22912 out.go:303] Setting JSON to true
	I1109 10:03:08.547392   22912 start.go:116] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":10963,"bootTime":1668006025,"procs":401,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.0","kernelVersion":"22.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W1109 10:03:08.547501   22912 start.go:124] gopshost.Virtualization returned error: not implemented yet
	I1109 10:03:08.569989   22912 out.go:97] [download-only-100255] minikube v1.28.0 on Darwin 13.0
	I1109 10:03:08.570187   22912 notify.go:220] Checking for updates...
	I1109 10:03:08.591857   22912 out.go:169] MINIKUBE_LOCATION=15331
	I1109 10:03:08.613103   22912 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/15331-22028/kubeconfig
	I1109 10:03:08.634947   22912 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I1109 10:03:08.657004   22912 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 10:03:08.679053   22912 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/15331-22028/.minikube
	W1109 10:03:08.721552   22912 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1109 10:03:08.722293   22912 config.go:180] Loaded profile config "download-only-100255": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.16.0
	W1109 10:03:08.722382   22912 start.go:716] api.Load failed for download-only-100255: filestore "download-only-100255": Docker machine "download-only-100255" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1109 10:03:08.722468   22912 driver.go:365] Setting default libvirt URI to qemu:///system
	W1109 10:03:08.722507   22912 start.go:716] api.Load failed for download-only-100255: filestore "download-only-100255": Docker machine "download-only-100255" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I1109 10:03:08.782157   22912 docker.go:137] docker version: linux-20.10.20
	I1109 10:03:08.782285   22912 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 10:03:08.920752   22912 info.go:266] docker info: {ID:DDH7:AERQ:YR2O:D5QA:GJP7:5WGB:4FTA:645H:2EO7:4YBT:LF5P:55H2 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:false NGoroutines:44 SystemTime:2022-11-09 18:03:08.845542955 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231719936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.1] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1109 10:03:08.942472   22912 out.go:97] Using the docker driver based on existing profile
	I1109 10:03:08.942506   22912 start.go:282] selected driver: docker
	I1109 10:03:08.942517   22912 start.go:808] validating driver "docker" against &{Name:download-only-100255 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:5895 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-100255 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1109 10:03:08.942872   22912 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 10:03:09.083132   22912 info.go:266] docker info: {ID:DDH7:AERQ:YR2O:D5QA:GJP7:5WGB:4FTA:645H:2EO7:4YBT:LF5P:55H2 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:43 OomKillDisable:false NGoroutines:44 SystemTime:2022-11-09 18:03:09.007948101 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231719936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.1] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1109 10:03:09.085521   22912 cni.go:95] Creating CNI manager for ""
	I1109 10:03:09.085540   22912 cni.go:169] CNI unnecessary in this configuration, recommending no CNI
	I1109 10:03:09.085555   22912 start_flags.go:317] config:
	{Name:download-only-100255 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:5895 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:download-only-100255 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1109 10:03:09.107167   22912 out.go:97] Starting control plane node download-only-100255 in cluster download-only-100255
	I1109 10:03:09.107280   22912 cache.go:120] Beginning downloading kic base image for docker with docker
	I1109 10:03:09.128181   22912 out.go:97] Pulling base image ...
	I1109 10:03:09.128328   22912 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1109 10:03:09.128431   22912 image.go:76] Checking for gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local docker daemon
	I1109 10:03:09.178123   22912 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.25.3/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	I1109 10:03:09.178143   22912 cache.go:57] Caching tarball of preloaded images
	I1109 10:03:09.178360   22912 preload.go:132] Checking if preload exists for k8s version v1.25.3 and runtime docker
	I1109 10:03:09.199286   22912 out.go:97] Downloading Kubernetes v1.25.3 preload ...
	I1109 10:03:09.199370   22912 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4 ...
	I1109 10:03:09.202841   22912 cache.go:147] Downloading gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 to local cache
	I1109 10:03:09.202971   22912 image.go:60] Checking for gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local cache directory
	I1109 10:03:09.203001   22912 image.go:63] Found gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 in local cache directory, skipping pull
	I1109 10:03:09.203009   22912 image.go:104] gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 exists in cache, skipping pull
	I1109 10:03:09.203022   22912 cache.go:150] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 as a tarball
	I1109 10:03:09.297886   22912 download.go:101] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.25.3/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4?checksum=md5:624cb874287e7e3d793b79e4205a7f98 -> /Users/jenkins/minikube-integration/15331-22028/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.25.3-docker-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-100255"

-- /stdout --
aaa_download_only_test.go:174: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.25.3/LogsDuration (0.29s)

TestDownloadOnly/DeleteAll (0.67s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.67s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.39s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:203: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-100255
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.39s)

TestDownloadOnlyKic (10.05s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p download-docker-100316 --force --alsologtostderr --driver=docker 
aaa_download_only_test.go:228: (dbg) Done: out/minikube-darwin-amd64 start --download-only -p download-docker-100316 --force --alsologtostderr --driver=docker : (8.950098977s)
helpers_test.go:175: Cleaning up "download-docker-100316" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-docker-100316
--- PASS: TestDownloadOnlyKic (10.05s)

TestBinaryMirror (1.71s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:310: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-100326 --alsologtostderr --binary-mirror http://127.0.0.1:60652 --driver=docker 
aaa_download_only_test.go:310: (dbg) Done: out/minikube-darwin-amd64 start --download-only -p binary-mirror-100326 --alsologtostderr --binary-mirror http://127.0.0.1:60652 --driver=docker : (1.086338961s)
helpers_test.go:175: Cleaning up "binary-mirror-100326" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-100326
--- PASS: TestBinaryMirror (1.71s)
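
For context, --binary-mirror points the kubectl/kubelet/kubeadm downloads at an alternate URL instead of storage.googleapis.com; the test serves one on 127.0.0.1. A rough manual equivalent, assuming a local directory laid out like the upstream release path <version>/bin/<os>/<arch>/<binary> (port, directory, and profile name are illustrative):

    python3 -m http.server 60652 --directory ./mirror &
    minikube start -p mirror-demo --download-only --binary-mirror http://127.0.0.1:60652 --driver=docker
    minikube delete -p mirror-demo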

TestOffline (86.14s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-104027 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker 

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Done: out/minikube-darwin-amd64 start -p offline-docker-104027 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker : (1m23.210023385s)
helpers_test.go:175: Cleaning up "offline-docker-104027" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-104027

=== CONT  TestOffline
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p offline-docker-104027: (2.925357452s)
--- PASS: TestOffline (86.14s)

TestAddons/Setup (147.85s)

=== RUN   TestAddons/Setup
addons_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-100328 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:76: (dbg) Done: out/minikube-darwin-amd64 start -p addons-100328 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --driver=docker  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m27.854064004s)
--- PASS: TestAddons/Setup (147.85s)
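
The --addons flags at start time are equivalent to toggling addons on a running profile; a small sketch against the profile created above (using a plain "minikube" binary rather than the out/ build):

    minikube -p addons-100328 addons enable metrics-server
    minikube -p addons-100328 addons list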

TestAddons/parallel/MetricsServer (5.56s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:360: metrics-server stabilized in 1.969832ms
addons_test.go:362: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:342: "metrics-server-769cd898cd-hn98x" [26754144-eb70-4bdc-ae6b-043a2bb4f959] Running

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:362: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.010433916s
addons_test.go:368: (dbg) Run:  kubectl --context addons-100328 top pods -n kube-system
addons_test.go:385: (dbg) Run:  out/minikube-darwin-amd64 -p addons-100328 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.56s)

TestAddons/parallel/HelmTiller (13s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:409: tiller-deploy stabilized in 2.4004ms
addons_test.go:411: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:342: "tiller-deploy-696b5bfbb7-h2knm" [309d4b18-75ea-4c1a-8ddc-bcc9a0038186] Running

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:411: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.011130605s
addons_test.go:426: (dbg) Run:  kubectl --context addons-100328 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:426: (dbg) Done: kubectl --context addons-100328 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system -- version: (7.489110455s)
addons_test.go:443: (dbg) Run:  out/minikube-darwin-amd64 -p addons-100328 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (13.00s)

TestAddons/parallel/CSI (39.56s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:514: csi-hostpath-driver pods stabilized in 6.604789ms
addons_test.go:517: (dbg) Run:  kubectl --context addons-100328 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:522: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-100328 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:527: (dbg) Run:  kubectl --context addons-100328 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:532: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:342: "task-pv-pod" [1c2ddf6b-f6a3-4e0e-bad7-0c2e776e062a] Pending
helpers_test.go:342: "task-pv-pod" [1c2ddf6b-f6a3-4e0e-bad7-0c2e776e062a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

=== CONT  TestAddons/parallel/CSI
helpers_test.go:342: "task-pv-pod" [1c2ddf6b-f6a3-4e0e-bad7-0c2e776e062a] Running
addons_test.go:532: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 16.009983334s
addons_test.go:537: (dbg) Run:  kubectl --context addons-100328 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:542: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:417: (dbg) Run:  kubectl --context addons-100328 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:417: (dbg) Run:  kubectl --context addons-100328 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:547: (dbg) Run:  kubectl --context addons-100328 delete pod task-pv-pod
addons_test.go:547: (dbg) Done: kubectl --context addons-100328 delete pod task-pv-pod: (1.017714762s)
addons_test.go:553: (dbg) Run:  kubectl --context addons-100328 delete pvc hpvc
addons_test.go:559: (dbg) Run:  kubectl --context addons-100328 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:564: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:392: (dbg) Run:  kubectl --context addons-100328 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:569: (dbg) Run:  kubectl --context addons-100328 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:574: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:342: "task-pv-pod-restore" [5e001056-2a8e-42aa-9d38-4dd13b5a1356] Pending
helpers_test.go:342: "task-pv-pod-restore" [5e001056-2a8e-42aa-9d38-4dd13b5a1356] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:342: "task-pv-pod-restore" [5e001056-2a8e-42aa-9d38-4dd13b5a1356] Running
addons_test.go:574: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 12.00765772s
addons_test.go:579: (dbg) Run:  kubectl --context addons-100328 delete pod task-pv-pod-restore
addons_test.go:583: (dbg) Run:  kubectl --context addons-100328 delete pvc hpvc-restore
addons_test.go:587: (dbg) Run:  kubectl --context addons-100328 delete volumesnapshot new-snapshot-demo
addons_test.go:591: (dbg) Run:  out/minikube-darwin-amd64 -p addons-100328 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:591: (dbg) Done: out/minikube-darwin-amd64 -p addons-100328 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.876245715s)
addons_test.go:595: (dbg) Run:  out/minikube-darwin-amd64 -p addons-100328 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (39.56s)
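
The restore half of this flow hinges on a PVC whose dataSource references the VolumeSnapshot; the testdata manifests themselves are not shown in this log, but a sketch of the key object looks roughly like this (storage class name and size are assumptions):

    kubectl --context addons-100328 apply -f - <<EOF
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: hpvc-restore
    spec:
      storageClassName: csi-hostpath-sc
      dataSource:
        name: new-snapshot-demo
        kind: VolumeSnapshot
        apiGroup: snapshot.storage.k8s.io
      accessModes: [ReadWriteOnce]
      resources:
        requests:
          storage: 1Gi
    EOF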

TestAddons/parallel/Headlamp (12.49s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:738: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-100328 --alsologtostderr -v=1
addons_test.go:738: (dbg) Done: out/minikube-darwin-amd64 addons enable headlamp -p addons-100328 --alsologtostderr -v=1: (1.477594344s)
addons_test.go:743: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:342: "headlamp-5f4cf474d8-bbvgv" [48c5758d-94ea-49ff-93ac-8fd35bb6f1fc] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])

=== CONT  TestAddons/parallel/Headlamp
helpers_test.go:342: "headlamp-5f4cf474d8-bbvgv" [48c5758d-94ea-49ff-93ac-8fd35bb6f1fc] Running

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:743: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.013095115s
--- PASS: TestAddons/parallel/Headlamp (12.49s)

TestAddons/parallel/CloudSpanner (5.48s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:759: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:342: "cloud-spanner-emulator-6c47ff8fb6-kkdb7" [84749db0-232c-456c-80e7-fc5a3d9b1b28] Running

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:759: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.009838524s
addons_test.go:762: (dbg) Run:  out/minikube-darwin-amd64 addons disable cloud-spanner -p addons-100328
--- PASS: TestAddons/parallel/CloudSpanner (5.48s)

TestAddons/serial/GCPAuth (15.53s)

=== RUN   TestAddons/serial/GCPAuth
addons_test.go:606: (dbg) Run:  kubectl --context addons-100328 create -f testdata/busybox.yaml
addons_test.go:613: (dbg) Run:  kubectl --context addons-100328 create sa gcp-auth-test
addons_test.go:619: (dbg) TestAddons/serial/GCPAuth: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [5a8057c0-c9d4-4e3e-9fbb-e8b7c9d3f347] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [5a8057c0-c9d4-4e3e-9fbb-e8b7c9d3f347] Running
addons_test.go:619: (dbg) TestAddons/serial/GCPAuth: integration-test=busybox healthy within 8.007292699s
addons_test.go:625: (dbg) Run:  kubectl --context addons-100328 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:637: (dbg) Run:  kubectl --context addons-100328 describe sa gcp-auth-test
addons_test.go:651: (dbg) Run:  kubectl --context addons-100328 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:675: (dbg) Run:  kubectl --context addons-100328 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
addons_test.go:688: (dbg) Run:  out/minikube-darwin-amd64 -p addons-100328 addons disable gcp-auth --alsologtostderr -v=1
addons_test.go:688: (dbg) Done: out/minikube-darwin-amd64 -p addons-100328 addons disable gcp-auth --alsologtostderr -v=1: (6.587871404s)
--- PASS: TestAddons/serial/GCPAuth (15.53s)

TestAddons/StoppedEnableDisable (12.93s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:135: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-100328
addons_test.go:135: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-100328: (12.482114378s)
addons_test.go:139: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-100328
addons_test.go:143: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-100328
--- PASS: TestAddons/StoppedEnableDisable (12.93s)

TestCertOptions (30.86s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-104226 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost
cert_options_test.go:49: (dbg) Done: out/minikube-darwin-amd64 start -p cert-options-104226 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --apiserver-name=localhost: (27.380505186s)
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-amd64 -p cert-options-104226 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cert-options-104226 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-104226" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-options-104226
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-options-104226: (2.591777003s)
--- PASS: TestCertOptions (30.86s)
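
The two ssh checks above verify that the extra names/IPs landed in the apiserver certificate and that --apiserver-port=8555 reached the kubeconfig. A hand-run equivalent of what is being inspected (profile name reused from the test; it is deleted at the end of the run):

    minikube ssh -p cert-options-104226 -- "sudo openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"
    minikube ssh -p cert-options-104226 -- "sudo cat /etc/kubernetes/admin.conf" | grep "server:"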

TestCertExpiration (237.04s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-104155 --memory=2048 --cert-expiration=3m --driver=docker 

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-104155 --memory=2048 --cert-expiration=3m --driver=docker : (27.985976867s)
=== CONT  TestCertExpiration
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-104155 --memory=2048 --cert-expiration=8760h --driver=docker 
E1109 10:45:42.895753   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/skaffold-103914/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-104155 --memory=2048 --cert-expiration=8760h --driver=docker : (26.446033877s)
helpers_test.go:175: Cleaning up "cert-expiration-104155" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-expiration-104155
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-expiration-104155: (2.603375937s)
--- PASS: TestCertExpiration (237.04s)
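Note: the two --cert-expiration values above are plain Go duration strings: 3m makes the freshly minted certs expire while the test waits, and 8760h is one year, so the second start has to regenerate them. A small sketch of how those strings parse, assuming standard time.ParseDuration semantics:

package main

import (
	"fmt"
	"time"
)

func main() {
	// The two values handed to --cert-expiration in the runs above.
	for _, s := range []string{"3m", "8760h"} {
		d, err := time.ParseDuration(s)
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s -> %v (%.1f days)\n", s, d, d.Hours()/24)
	}
}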

TestDockerFlags (33.68s)
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-104153 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker 
=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Done: out/minikube-darwin-amd64 start -p docker-flags-104153 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker : (30.268412872s)
docker_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-104153 ssh "sudo systemctl show docker --property=Environment --no-pager"
=== CONT  TestDockerFlags
docker_test.go:61: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-104153 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-104153" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-104153
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-flags-104153: (2.596733206s)
--- PASS: TestDockerFlags (33.68s)
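Note: the two systemctl show calls above (docker_test.go:50 and :61) assert that each --docker-env value landed in the docker unit's Environment property and each --docker-opt in its ExecStart line. A minimal sketch of that style of assertion; the property line here is hard-coded to the shape such output is assumed to take:

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Assumed sample of "systemctl show docker --property=Environment" output.
	out := "Environment=FOO=BAR BAZ=BAT"
	env := strings.TrimPrefix(strings.TrimSpace(out), "Environment=")
	for _, want := range []string{"FOO=BAR", "BAZ=BAT"} {
		if strings.Contains(env, want) {
			fmt.Println("found: ", want)
		} else {
			fmt.Println("missing:", want)
		}
	}
}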

TestForceSystemdFlag (31.4s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-104124 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker 
E1109 10:41:45.167469   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/functional-100827/client.crt: no such file or directory
=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-flag-104124 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker : (28.168769667s)
docker_test.go:104: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-104124 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-104124" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-104124
=== CONT  TestForceSystemdFlag
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-flag-104124: (2.749864312s)
--- PASS: TestForceSystemdFlag (31.40s)
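Note: this test and TestForceSystemdEnv below both reduce to the same check: docker info --format {{.CgroupDriver}} inside the node must print systemd. A minimal sketch of that check, simplified to call a local docker CLI directly instead of going through minikube ssh:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same query the test issues over ssh at docker_test.go:104.
	out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
	if err != nil {
		panic(err)
	}
	driver := strings.TrimSpace(string(out))
	fmt.Println("cgroup driver:", driver)
	if driver != "systemd" {
		fmt.Println("force-systemd did not take effect")
	}
}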

TestForceSystemdEnv (30.49s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:149: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-104053 --memory=2048 --alsologtostderr -v=5 --driver=docker 
E1109 10:40:56.422068   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/addons-100328/client.crt: no such file or directory
docker_test.go:149: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-env-104053 --memory=2048 --alsologtostderr -v=5 --driver=docker : (27.408819185s)
docker_test.go:104: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-104053 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-104053" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-104053
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-env-104053: (2.617186798s)
--- PASS: TestForceSystemdEnv (30.49s)

TestHyperKitDriverInstallOrUpdate (7.77s)
=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate
=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (7.77s)

TestErrorSpam/setup (27.47s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-100737 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-100737 --driver=docker 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-100737 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-100737 --driver=docker : (27.473700069s)
--- PASS: TestErrorSpam/setup (27.47s)

TestErrorSpam/start (2.38s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-100737 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-100737 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-100737 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-100737 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-100737 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-100737 start --dry-run
--- PASS: TestErrorSpam/start (2.38s)

TestErrorSpam/status (1.23s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-100737 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-100737 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-100737 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-100737 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-100737 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-100737 status
--- PASS: TestErrorSpam/status (1.23s)

TestErrorSpam/pause (1.79s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-100737 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-100737 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-100737 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-100737 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-100737 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-100737 pause
--- PASS: TestErrorSpam/pause (1.79s)

TestErrorSpam/unpause (1.91s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-100737 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-100737 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-100737 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-100737 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-100737 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-100737 unpause
--- PASS: TestErrorSpam/unpause (1.91s)

TestErrorSpam/stop (13.06s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-100737 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-100737 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-100737 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-100737 stop: (12.411100403s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-100737 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-100737 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-100737 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-100737 stop
--- PASS: TestErrorSpam/stop (13.06s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1782: local sync path: /Users/jenkins/minikube-integration/15331-22028/.minikube/files/etc/test/nested/copy/22868/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (90.99s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2161: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-100827 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker 
functional_test.go:2161: (dbg) Done: out/minikube-darwin-amd64 start -p functional-100827 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker : (1m30.990325316s)
--- PASS: TestFunctional/serial/StartWithProxy (90.99s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (36.82s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:652: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-100827 --alsologtostderr -v=8
functional_test.go:652: (dbg) Done: out/minikube-darwin-amd64 start -p functional-100827 --alsologtostderr -v=8: (36.821636674s)
functional_test.go:656: soft start took 36.822094924s for "functional-100827" cluster.
--- PASS: TestFunctional/serial/SoftStart (36.82s)

TestFunctional/serial/KubeContext (0.04s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:674: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.08s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:689: (dbg) Run:  kubectl --context functional-100827 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (5.89s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1042: (dbg) Run:  out/minikube-darwin-amd64 -p functional-100827 cache add k8s.gcr.io/pause:3.1
functional_test.go:1042: (dbg) Done: out/minikube-darwin-amd64 -p functional-100827 cache add k8s.gcr.io/pause:3.1: (2.066232099s)
functional_test.go:1042: (dbg) Run:  out/minikube-darwin-amd64 -p functional-100827 cache add k8s.gcr.io/pause:3.3
functional_test.go:1042: (dbg) Done: out/minikube-darwin-amd64 -p functional-100827 cache add k8s.gcr.io/pause:3.3: (1.945848713s)
functional_test.go:1042: (dbg) Run:  out/minikube-darwin-amd64 -p functional-100827 cache add k8s.gcr.io/pause:latest
functional_test.go:1042: (dbg) Done: out/minikube-darwin-amd64 -p functional-100827 cache add k8s.gcr.io/pause:latest: (1.875628344s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (5.89s)

TestFunctional/serial/CacheCmd/cache/add_local (1.81s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1070: (dbg) Run:  docker build -t minikube-local-cache-test:functional-100827 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalserialCacheCmdcacheadd_local1313350372/001
functional_test.go:1082: (dbg) Run:  out/minikube-darwin-amd64 -p functional-100827 cache add minikube-local-cache-test:functional-100827
functional_test.go:1082: (dbg) Done: out/minikube-darwin-amd64 -p functional-100827 cache add minikube-local-cache-test:functional-100827: (1.285618358s)
functional_test.go:1087: (dbg) Run:  out/minikube-darwin-amd64 -p functional-100827 cache delete minikube-local-cache-test:functional-100827
functional_test.go:1076: (dbg) Run:  docker rmi minikube-local-cache-test:functional-100827
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.81s)

TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.08s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:1095: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.08s)

TestFunctional/serial/CacheCmd/cache/list (0.08s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1103: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.08s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.43s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1117: (dbg) Run:  out/minikube-darwin-amd64 -p functional-100827 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.43s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.4s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1140: (dbg) Run:  out/minikube-darwin-amd64 -p functional-100827 ssh sudo docker rmi k8s.gcr.io/pause:latest
functional_test.go:1146: (dbg) Run:  out/minikube-darwin-amd64 -p functional-100827 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1146: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-100827 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (393.75478ms)
-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1151: (dbg) Run:  out/minikube-darwin-amd64 -p functional-100827 cache reload
functional_test.go:1151: (dbg) Done: out/minikube-darwin-amd64 -p functional-100827 cache reload: (1.165627304s)
functional_test.go:1156: (dbg) Run:  out/minikube-darwin-amd64 -p functional-100827 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.40s)
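Note: the non-zero exit captured above is the point of this test: after the image is removed, crictl inspecti exits 1, and cache reload pushes the cached image back so the final inspecti succeeds. A sketch of how such an exit status is read from Go; the direct crictl invocation is hypothetical, since the test actually runs it via minikube ssh:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// imagePresent treats a non-zero exit from crictl (the "exit status 1" seen
// above) as "image missing", and any other failure as a real error.
func imagePresent(image string) (bool, error) {
	err := exec.Command("crictl", "inspecti", image).Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		return false, nil
	}
	return err == nil, err
}

func main() {
	ok, err := imagePresent("k8s.gcr.io/pause:latest")
	fmt.Println(ok, err)
}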

TestFunctional/serial/CacheCmd/cache/delete (0.16s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1165: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1165: (dbg) Run:  out/minikube-darwin-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.16s)

TestFunctional/serial/MinikubeKubectlCmd (0.51s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:709: (dbg) Run:  out/minikube-darwin-amd64 -p functional-100827 kubectl -- --context functional-100827 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.51s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.65s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:734: (dbg) Run:  out/kubectl --context functional-100827 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.65s)

TestFunctional/serial/ExtraConfig (44.1s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:750: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-100827 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1109 10:10:56.526930   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/addons-100328/client.crt: no such file or directory
E1109 10:10:56.533395   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/addons-100328/client.crt: no such file or directory
E1109 10:10:56.544350   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/addons-100328/client.crt: no such file or directory
E1109 10:10:56.565580   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/addons-100328/client.crt: no such file or directory
E1109 10:10:56.607447   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/addons-100328/client.crt: no such file or directory
E1109 10:10:56.689670   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/addons-100328/client.crt: no such file or directory
E1109 10:10:56.851907   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/addons-100328/client.crt: no such file or directory
E1109 10:10:57.174130   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/addons-100328/client.crt: no such file or directory
E1109 10:10:57.814397   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/addons-100328/client.crt: no such file or directory
E1109 10:10:59.096701   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/addons-100328/client.crt: no such file or directory
E1109 10:11:01.659089   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/addons-100328/client.crt: no such file or directory
E1109 10:11:06.779644   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/addons-100328/client.crt: no such file or directory
E1109 10:11:17.020597   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/addons-100328/client.crt: no such file or directory
functional_test.go:750: (dbg) Done: out/minikube-darwin-amd64 start -p functional-100827 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (44.103469829s)
functional_test.go:754: restart took 44.103600791s for "functional-100827" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (44.10s)

TestFunctional/serial/ComponentHealth (0.06s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:803: (dbg) Run:  kubectl --context functional-100827 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:818: etcd phase: Running
functional_test.go:828: etcd status: Ready
functional_test.go:818: kube-apiserver phase: Running
functional_test.go:828: kube-apiserver status: Ready
functional_test.go:818: kube-controller-manager phase: Running
functional_test.go:828: kube-controller-manager status: Ready
functional_test.go:818: kube-scheduler phase: Running
functional_test.go:828: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)
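Note: the phase/status pairs above come from decoding the kubectl -o=json pod list and reading each control-plane pod's status.phase plus its Ready condition. A stand-alone sketch of that decode, reading the same JSON from stdin:

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// podList mirrors just the fields of a `kubectl get po -o json` pod list
// that the health check needs.
type podList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	var pods podList
	if err := json.NewDecoder(os.Stdin).Decode(&pods); err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s phase: %s\n", p.Metadata.Name, p.Status.Phase)
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" {
				fmt.Printf("%s ready: %s\n", p.Metadata.Name, c.Status)
			}
		}
	}
}

Piped from kubectl --context functional-100827 get po -l tier=control-plane -n kube-system -o json, it prints the same kind of phase/ready read-out as the test.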

TestFunctional/serial/LogsCmd (3.14s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1229: (dbg) Run:  out/minikube-darwin-amd64 -p functional-100827 logs
functional_test.go:1229: (dbg) Done: out/minikube-darwin-amd64 -p functional-100827 logs: (3.14330373s)
--- PASS: TestFunctional/serial/LogsCmd (3.14s)

TestFunctional/serial/LogsFileCmd (3.05s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-100827 logs --file /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalserialLogsFileCmd3160157965/001/logs.txt
E1109 10:11:37.502960   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/addons-100328/client.crt: no such file or directory
functional_test.go:1243: (dbg) Done: out/minikube-darwin-amd64 -p functional-100827 logs --file /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalserialLogsFileCmd3160157965/001/logs.txt: (3.050360913s)
--- PASS: TestFunctional/serial/LogsFileCmd (3.05s)

TestFunctional/parallel/ConfigCmd (0.51s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Run:  out/minikube-darwin-amd64 -p functional-100827 config unset cpus
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Run:  out/minikube-darwin-amd64 -p functional-100827 config get cpus
functional_test.go:1192: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-100827 config get cpus: exit status 14 (81.860234ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1192: (dbg) Run:  out/minikube-darwin-amd64 -p functional-100827 config set cpus 2
functional_test.go:1192: (dbg) Run:  out/minikube-darwin-amd64 -p functional-100827 config get cpus
functional_test.go:1192: (dbg) Run:  out/minikube-darwin-amd64 -p functional-100827 config unset cpus
functional_test.go:1192: (dbg) Run:  out/minikube-darwin-amd64 -p functional-100827 config get cpus
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1192: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-100827 config get cpus: exit status 14 (58.16316ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.51s)

TestFunctional/parallel/DashboardCmd (17.22s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:898: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-100827 --alsologtostderr -v=1]
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:903: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-100827 --alsologtostderr -v=1] ...
helpers_test.go:506: unable to kill pid 25384: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (17.22s)

TestFunctional/parallel/DryRun (1.48s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:967: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-100827 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:967: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-100827 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (665.990047ms)
-- stdout --
	* [functional-100827] minikube v1.28.0 on Darwin 13.0
	  - MINIKUBE_LOCATION=15331
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15331-22028/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15331-22028/.minikube
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1109 10:12:45.997723   25318 out.go:296] Setting OutFile to fd 1 ...
	I1109 10:12:45.997911   25318 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1109 10:12:45.997916   25318 out.go:309] Setting ErrFile to fd 2...
	I1109 10:12:45.997924   25318 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1109 10:12:45.998034   25318 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15331-22028/.minikube/bin
	I1109 10:12:45.998518   25318 out.go:303] Setting JSON to false
	I1109 10:12:46.017683   25318 start.go:116] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":11541,"bootTime":1668006025,"procs":384,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.0","kernelVersion":"22.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W1109 10:12:46.017783   25318 start.go:124] gopshost.Virtualization returned error: not implemented yet
	I1109 10:12:46.039501   25318 out.go:177] * [functional-100827] minikube v1.28.0 on Darwin 13.0
	I1109 10:12:46.081138   25318 notify.go:220] Checking for updates...
	I1109 10:12:46.102390   25318 out.go:177]   - MINIKUBE_LOCATION=15331
	I1109 10:12:46.123404   25318 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15331-22028/kubeconfig
	I1109 10:12:46.144494   25318 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1109 10:12:46.166602   25318 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 10:12:46.188640   25318 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15331-22028/.minikube
	I1109 10:12:46.210809   25318 config.go:180] Loaded profile config "functional-100827": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1109 10:12:46.211381   25318 driver.go:365] Setting default libvirt URI to qemu:///system
	I1109 10:12:46.274229   25318 docker.go:137] docker version: linux-20.10.20
	I1109 10:12:46.274383   25318 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 10:12:46.416522   25318 info.go:266] docker info: {ID:DDH7:AERQ:YR2O:D5QA:GJP7:5WGB:4FTA:645H:2EO7:4YBT:LF5P:55H2 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:52 SystemTime:2022-11-09 18:12:46.345018774 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231719936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.1] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1109 10:12:46.459108   25318 out.go:177] * Using the docker driver based on existing profile
	I1109 10:12:46.480176   25318 start.go:282] selected driver: docker
	I1109 10:12:46.480204   25318 start.go:808] validating driver "docker" against &{Name:functional-100827 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:functional-100827 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1109 10:12:46.480360   25318 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 10:12:46.505105   25318 out.go:177] 
	W1109 10:12:46.526175   25318 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1109 10:12:46.547025   25318 out.go:177] 
** /stderr **
functional_test.go:984: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-100827 --dry-run --alsologtostderr -v=1 --driver=docker 
--- PASS: TestFunctional/parallel/DryRun (1.48s)
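Note: exit status 23 above is minikube's RSRC_INSUFFICIENT_REQ_MEMORY guard: the requested 250MB sits below the 1800MB floor quoted in the message, so the dry run bails out before touching the cluster. A toy sketch of that comparison; the 1800MB threshold is lifted from the log line, not from minikube's source:

package main

import "fmt"

const minUsableMB = 1800 // floor quoted in the error message above

func validateMemory(requestedMB int) error {
	if requestedMB < minUsableMB {
		return fmt.Errorf("RSRC_INSUFFICIENT_REQ_MEMORY: requested %dMB is less than the usable minimum of %dMB",
			requestedMB, minUsableMB)
	}
	return nil
}

func main() {
	fmt.Println(validateMemory(250))  // fails, like --memory 250MB above
	fmt.Println(validateMemory(4000)) // passes, like the profile's 4000MB
}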

TestFunctional/parallel/InternationalLanguage (0.71s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1013: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-100827 --dry-run --memory 250MB --alsologtostderr --driver=docker 
functional_test.go:1013: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-100827 --dry-run --memory 250MB --alsologtostderr --driver=docker : exit status 23 (713.665175ms)
-- stdout --
	* [functional-100827] minikube v1.28.0 sur Darwin 13.0
	  - MINIKUBE_LOCATION=15331
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15331-22028/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15331-22028/.minikube
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I1109 10:12:47.473228   25356 out.go:296] Setting OutFile to fd 1 ...
	I1109 10:12:47.473406   25356 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1109 10:12:47.473411   25356 out.go:309] Setting ErrFile to fd 2...
	I1109 10:12:47.473415   25356 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1109 10:12:47.473545   25356 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15331-22028/.minikube/bin
	I1109 10:12:47.474012   25356 out.go:303] Setting JSON to false
	I1109 10:12:47.493149   25356 start.go:116] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":11542,"bootTime":1668006025,"procs":384,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"13.0","kernelVersion":"22.1.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W1109 10:12:47.493253   25356 start.go:124] gopshost.Virtualization returned error: not implemented yet
	I1109 10:12:47.514477   25356 out.go:177] * [functional-100827] minikube v1.28.0 sur Darwin 13.0
	I1109 10:12:47.572342   25356 notify.go:220] Checking for updates...
	I1109 10:12:47.609553   25356 out.go:177]   - MINIKUBE_LOCATION=15331
	I1109 10:12:47.667293   25356 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/15331-22028/kubeconfig
	I1109 10:12:47.688634   25356 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I1109 10:12:47.709561   25356 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1109 10:12:47.730508   25356 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15331-22028/.minikube
	I1109 10:12:47.754246   25356 config.go:180] Loaded profile config "functional-100827": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1109 10:12:47.754930   25356 driver.go:365] Setting default libvirt URI to qemu:///system
	I1109 10:12:47.817582   25356 docker.go:137] docker version: linux-20.10.20
	I1109 10:12:47.817759   25356 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1109 10:12:47.960369   25356 info.go:266] docker info: {ID:DDH7:AERQ:YR2O:D5QA:GJP7:5WGB:4FTA:645H:2EO7:4YBT:LF5P:55H2 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:60 OomKillDisable:false NGoroutines:52 SystemTime:2022-11-09 18:12:47.86753117 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:3 KernelVersion:5.15.49-linuxkit OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:6 MemTotal:6231719936 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.20 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6 Expected:9cd3357b7fd7218e4aec3eae239db1f68a5a6ec6} RuncCommit:{ID:v1.1.4-0-g5fd4c4d Expected:v1.1.4-0-g5fd4c4d} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/local/lib/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.9.1] map[Name:compose Path:/usr/local/lib/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.12.1] map[Name:dev Path:/usr/local/lib/docker/cli-plugins/docker-dev SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.0.3] map[Name:extension Path:/usr/local/lib/docker/cli-plugins/docker-extension SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.13] map[Name:sbom Path:/usr/local/lib/docker/cli-plugins/docker-sbom SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scan Path:/usr/local/lib/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.21.0]] Warnings:<nil>}}
	I1109 10:12:48.002490   25356 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1109 10:12:48.023554   25356 start.go:282] selected driver: docker
	I1109 10:12:48.023610   25356 start.go:808] validating driver "docker" against &{Name:functional-100827 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:functional-100827 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
	I1109 10:12:48.023771   25356 start.go:819] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1109 10:12:48.048581   25356 out.go:177] 
	W1109 10:12:48.069655   25356 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1109 10:12:48.091752   25356 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.71s)

TestFunctional/parallel/StatusCmd (1.41s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:847: (dbg) Run:  out/minikube-darwin-amd64 -p functional-100827 status
functional_test.go:853: (dbg) Run:  out/minikube-darwin-amd64 -p functional-100827 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:865: (dbg) Run:  out/minikube-darwin-amd64 -p functional-100827 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.41s)

TestFunctional/parallel/ServiceCmd (19.85s)
=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1433: (dbg) Run:  kubectl --context functional-100827 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1439: (dbg) Run:  kubectl --context functional-100827 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1444: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:342: "hello-node-5fcdfb5cc4-cvcnc" [179d8e4f-bec3-4e16-a11d-7f1bc4843c27] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:342: "hello-node-5fcdfb5cc4-cvcnc" [179d8e4f-bec3-4e16-a11d-7f1bc4843c27] Running
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1444: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 13.009394439s
functional_test.go:1449: (dbg) Run:  out/minikube-darwin-amd64 -p functional-100827 service list
functional_test.go:1463: (dbg) Run:  out/minikube-darwin-amd64 -p functional-100827 service --namespace=default --https --url hello-node
E1109 10:12:18.465192   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/addons-100328/client.crt: no such file or directory
functional_test.go:1463: (dbg) Done: out/minikube-darwin-amd64 -p functional-100827 service --namespace=default --https --url hello-node: (2.028804938s)
functional_test.go:1476: found endpoint: https://127.0.0.1:61492
functional_test.go:1491: (dbg) Run:  out/minikube-darwin-amd64 -p functional-100827 service hello-node --url --format={{.IP}}
functional_test.go:1491: (dbg) Done: out/minikube-darwin-amd64 -p functional-100827 service hello-node --url --format={{.IP}}: (2.029062865s)
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-amd64 -p functional-100827 service hello-node --url
=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1505: (dbg) Done: out/minikube-darwin-amd64 -p functional-100827 service hello-node --url: (2.028151164s)
functional_test.go:1511: found endpoint for hello-node: http://127.0.0.1:61508
--- PASS: TestFunctional/parallel/ServiceCmd (19.85s)

TestFunctional/parallel/AddonsCmd (0.26s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1620: (dbg) Run:  out/minikube-darwin-amd64 -p functional-100827 addons list
functional_test.go:1632: (dbg) Run:  out/minikube-darwin-amd64 -p functional-100827 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.26s)

TestFunctional/parallel/PersistentVolumeClaim (28.91s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:342: "storage-provisioner" [011b1884-d230-4218-8308-62135753ea78] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.010920605s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-100827 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-100827 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-100827 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-100827 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [66192d13-4c43-49ba-946a-ed16332b843c] Pending
helpers_test.go:342: "sp-pod" [66192d13-4c43-49ba-946a-ed16332b843c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [66192d13-4c43-49ba-946a-ed16332b843c] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 15.009438807s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-100827 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-100827 delete -f testdata/storage-provisioner/pod.yaml

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-100827 delete -f testdata/storage-provisioner/pod.yaml: (1.241516725s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-100827 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:342: "sp-pod" [5806b62e-6b38-4876-b814-9a1718f6ebe6] Pending

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [5806b62e-6b38-4876-b814-9a1718f6ebe6] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:342: "sp-pod" [5806b62e-6b38-4876-b814-9a1718f6ebe6] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.010769833s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-100827 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (28.91s)

TestFunctional/parallel/SSHCmd (0.81s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1655: (dbg) Run:  out/minikube-darwin-amd64 -p functional-100827 ssh "echo hello"
functional_test.go:1672: (dbg) Run:  out/minikube-darwin-amd64 -p functional-100827 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.81s)

TestFunctional/parallel/CpCmd (2.07s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p functional-100827 cp testdata/cp-test.txt /home/docker/cp-test.txt

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p functional-100827 ssh -n functional-100827 "sudo cat /home/docker/cp-test.txt"

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p functional-100827 cp functional-100827:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelCpCmd4211153804/001/cp-test.txt

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p functional-100827 ssh -n functional-100827 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.07s)

TestFunctional/parallel/MySQL (24.33s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1720: (dbg) Run:  kubectl --context functional-100827 replace --force -f testdata/mysql.yaml

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1726: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...

=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:342: "mysql-596b7fcdbf-gjc5m" [f1f5b1b2-3a2a-4077-845f-c489ddeaa561] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])

=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:342: "mysql-596b7fcdbf-gjc5m" [f1f5b1b2-3a2a-4077-845f-c489ddeaa561] Running

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1726: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 24.014344066s
functional_test.go:1734: (dbg) Run:  kubectl --context functional-100827 exec mysql-596b7fcdbf-gjc5m -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (24.33s)

TestFunctional/parallel/FileSync (0.43s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1856: Checking for existence of /etc/test/nested/copy/22868/hosts within VM
functional_test.go:1858: (dbg) Run:  out/minikube-darwin-amd64 -p functional-100827 ssh "sudo cat /etc/test/nested/copy/22868/hosts"
functional_test.go:1863: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.43s)

TestFunctional/parallel/CertSync (2.58s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1899: Checking for existence of /etc/ssl/certs/22868.pem within VM
functional_test.go:1900: (dbg) Run:  out/minikube-darwin-amd64 -p functional-100827 ssh "sudo cat /etc/ssl/certs/22868.pem"

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1899: Checking for existence of /usr/share/ca-certificates/22868.pem within VM
functional_test.go:1900: (dbg) Run:  out/minikube-darwin-amd64 -p functional-100827 ssh "sudo cat /usr/share/ca-certificates/22868.pem"
functional_test.go:1899: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1900: (dbg) Run:  out/minikube-darwin-amd64 -p functional-100827 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1926: Checking for existence of /etc/ssl/certs/228682.pem within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-amd64 -p functional-100827 ssh "sudo cat /etc/ssl/certs/228682.pem"
functional_test.go:1926: Checking for existence of /usr/share/ca-certificates/228682.pem within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-amd64 -p functional-100827 ssh "sudo cat /usr/share/ca-certificates/228682.pem"
functional_test.go:1926: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-amd64 -p functional-100827 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.58s)

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:215: (dbg) Run:  kubectl --context functional-100827 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.51s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1954: (dbg) Run:  out/minikube-darwin-amd64 -p functional-100827 ssh "sudo systemctl is-active crio"

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1954: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-100827 ssh "sudo systemctl is-active crio": exit status 1 (513.349086ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.51s)

TestFunctional/parallel/License (0.58s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2215: (dbg) Run:  out/minikube-darwin-amd64 license
--- PASS: TestFunctional/parallel/License (0.58s)

TestFunctional/parallel/Version/short (0.1s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2183: (dbg) Run:  out/minikube-darwin-amd64 -p functional-100827 version --short
--- PASS: TestFunctional/parallel/Version/short (0.10s)

TestFunctional/parallel/Version/components (0.71s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2197: (dbg) Run:  out/minikube-darwin-amd64 -p functional-100827 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.71s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-100827 image ls --format short
functional_test.go:262: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-100827 image ls --format short:
registry.k8s.io/pause:3.8
registry.k8s.io/kube-scheduler:v1.25.3
registry.k8s.io/kube-proxy:v1.25.3
registry.k8s.io/kube-controller-manager:v1.25.3
registry.k8s.io/kube-apiserver:v1.25.3
registry.k8s.io/etcd:3.5.4-0
registry.k8s.io/coredns/coredns:v1.9.3
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.6
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/echoserver:1.8
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-100827
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-100827
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.31s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.34s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-100827 image ls --format table
functional_test.go:262: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-100827 image ls --format table:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/kube-controller-manager     | v1.25.3           | 6039992312758 | 117MB  |
| registry.k8s.io/kube-proxy                  | v1.25.3           | beaaf00edd38a | 61.7MB |
| k8s.gcr.io/pause                            | 3.6               | 6270bb605e12e | 683kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| k8s.gcr.io/echoserver                       | 1.8               | 82e4c8a736a4f | 95.4MB |
| docker.io/library/mysql                     | 5.7               | eef0fab001e8d | 495MB  |
| registry.k8s.io/coredns/coredns             | v1.9.3            | 5185b96f0becf | 48.8MB |
| k8s.gcr.io/pause                            | 3.1               | da86e6ba6ca19 | 742kB  |
| docker.io/library/nginx                     | latest            | 76c69feac34e8 | 142MB  |
| docker.io/library/nginx                     | alpine            | b997307a58ab5 | 23.6MB |
| registry.k8s.io/kube-apiserver              | v1.25.3           | 0346dbd74bcb9 | 128MB  |
| registry.k8s.io/kube-scheduler              | v1.25.3           | 6d23ec0e8b87e | 50.6MB |
| registry.k8s.io/pause                       | 3.8               | 4873874c08efc | 711kB  |
| docker.io/library/minikube-local-cache-test | functional-100827 | b232add0d7cf3 | 30B    |
| gcr.io/google-containers/addon-resizer      | functional-100827 | ffd4cfbbe753e | 32.9MB |
| k8s.gcr.io/pause                            | 3.3               | 0184c1613d929 | 683kB  |
| k8s.gcr.io/pause                            | latest            | 350b164e7ae1d | 240kB  |
| registry.k8s.io/etcd                        | 3.5.4-0           | a8a176a5d5d69 | 300MB  |
|---------------------------------------------|-------------------|---------------|--------|
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.34s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-100827 image ls --format json
functional_test.go:262: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-100827 image ls --format json:
[{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["k8s.gcr.io/echoserver:1.8"],"size":"95400000"},{"id":"6d23ec0e8b87eaaa698c3425c2c4d25f7329c587e9b39d967ab3f60048983912","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.25.3"],"size":"50600000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-100827"],"size":"32900000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"b232add0d7cf3b320748360f36dffa6b515a6c8c6a6da7d3c101333891802722","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-100827"],"size":"30"},{"id":"5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.9.3"],"size":"48800000"},{"id":"56cc
512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"60399923127581086e9029f30a0c9e3c88708efa8fc05d22d5e33887e7c0310a","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.25.3"],"size":"117000000"},{"id":"4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.8"],"size":"711000"},{"id":"b997307a58ab5b542359e567c9f77bb2a7cc3da1432baf6de2b3ae3e7b872070","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"23600000"},{"id":"0346dbd74bcb9485bb4da1b33027094d79488470d8d1b9baa4d927db564e4fe0","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.25.3"],"size":"128000000"},{"id":"beaaf00edd38a6cb405376588e708084376a6786e722231dc8a1482730e0c041","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.25.3"],"size":"61700000"},{"id":"a8a176a5d5d698f9409dc246f81fa69d37d4a2f4132ba5e62e72a78476b27f66"
,"repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.4-0"],"size":"300000000"},{"id":"6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.6"],"size":"683000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.3"],"size":"683000"},{"id":"eef0fab001e8dea739d538688b09e162bf54dd6c2bc04066bff99b5335cd6223","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"495000000"},{"id":"76c69feac34e85768b284f84416c3546b240e8cb4f68acbbe5ad261a8b36f39f","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"142000000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["k8s.gcr.io/pause:3.1"],"size":"742000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["k8s.gcr.io/pause:latest"],"size":"240000"}]
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-100827 image ls --format yaml
functional_test.go:262: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-100827 image ls --format yaml:
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- k8s.gcr.io/pause:latest
size: "240000"
- id: 4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.8
size: "711000"
- id: 6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.6
size: "683000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.3
size: "683000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- k8s.gcr.io/pause:3.1
size: "742000"
- id: 76c69feac34e85768b284f84416c3546b240e8cb4f68acbbe5ad261a8b36f39f
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "142000000"
- id: 0346dbd74bcb9485bb4da1b33027094d79488470d8d1b9baa4d927db564e4fe0
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.25.3
size: "128000000"
- id: a8a176a5d5d698f9409dc246f81fa69d37d4a2f4132ba5e62e72a78476b27f66
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.4-0
size: "300000000"
- id: 5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.9.3
size: "48800000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-100827
size: "32900000"
- id: b232add0d7cf3b320748360f36dffa6b515a6c8c6a6da7d3c101333891802722
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-100827
size: "30"
- id: eef0fab001e8dea739d538688b09e162bf54dd6c2bc04066bff99b5335cd6223
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "495000000"
- id: b997307a58ab5b542359e567c9f77bb2a7cc3da1432baf6de2b3ae3e7b872070
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "23600000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- k8s.gcr.io/echoserver:1.8
size: "95400000"
- id: 6d23ec0e8b87eaaa698c3425c2c4d25f7329c587e9b39d967ab3f60048983912
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.25.3
size: "50600000"
- id: 60399923127581086e9029f30a0c9e3c88708efa8fc05d22d5e33887e7c0310a
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.25.3
size: "117000000"
- id: beaaf00edd38a6cb405376588e708084376a6786e722231dc8a1482730e0c041
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.25.3
size: "61700000"

--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.31s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.91s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p functional-100827 ssh pgrep buildkitd
functional_test.go:304: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-100827 ssh pgrep buildkitd: exit status 1 (427.617107ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 -p functional-100827 image build -t localhost/my-image:functional-100827 testdata/build
functional_test.go:311: (dbg) Done: out/minikube-darwin-amd64 -p functional-100827 image build -t localhost/my-image:functional-100827 testdata/build: (3.11831937s)
functional_test.go:316: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-100827 image build -t localhost/my-image:functional-100827 testdata/build:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in cab3938865ab
Removing intermediate container cab3938865ab
---> 1c6746b9b319
Step 3/3 : ADD content.txt /
---> db53e7777c1a
Successfully built db53e7777c1a
Successfully tagged localhost/my-image:functional-100827
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-100827 image ls
2022/11/09 10:13:05 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.91s)

TestFunctional/parallel/ImageCommands/Setup (2.51s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:338: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8

=== CONT  TestFunctional/parallel/ImageCommands/Setup
functional_test.go:338: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.449365426s)
functional_test.go:343: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-100827
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.51s)

TestFunctional/parallel/DockerEnv/bash (1.87s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:492: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-100827 docker-env) && out/minikube-darwin-amd64 status -p functional-100827"
functional_test.go:492: (dbg) Done: /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-100827 docker-env) && out/minikube-darwin-amd64 status -p functional-100827": (1.155253468s)
functional_test.go:515: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-100827 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.87s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.3s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2046: (dbg) Run:  out/minikube-darwin-amd64 -p functional-100827 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.30s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.4s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2046: (dbg) Run:  out/minikube-darwin-amd64 -p functional-100827 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.40s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.29s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2046: (dbg) Run:  out/minikube-darwin-amd64 -p functional-100827 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.29s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:351: (dbg) Run:  out/minikube-darwin-amd64 -p functional-100827 image load --daemon gcr.io/google-containers/addon-resizer:functional-100827

=== CONT  TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:351: (dbg) Done: out/minikube-darwin-amd64 -p functional-100827 image load --daemon gcr.io/google-containers/addon-resizer:functional-100827: (2.995245645s)
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-100827 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.32s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:361: (dbg) Run:  out/minikube-darwin-amd64 -p functional-100827 image load --daemon gcr.io/google-containers/addon-resizer:functional-100827

=== CONT  TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:361: (dbg) Done: out/minikube-darwin-amd64 -p functional-100827 image load --daemon gcr.io/google-containers/addon-resizer:functional-100827: (2.033533402s)
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-100827 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.33s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (8s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:231: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:231: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (3.387434985s)
functional_test.go:236: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-100827
functional_test.go:241: (dbg) Run:  out/minikube-darwin-amd64 -p functional-100827 image load --daemon gcr.io/google-containers/addon-resizer:functional-100827
functional_test.go:241: (dbg) Done: out/minikube-darwin-amd64 -p functional-100827 image load --daemon gcr.io/google-containers/addon-resizer:functional-100827: (4.191115791s)
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-100827 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (8.00s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.7s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:376: (dbg) Run:  out/minikube-darwin-amd64 -p functional-100827 image save gcr.io/google-containers/addon-resizer:functional-100827 /Users/jenkins/workspace/addon-resizer-save.tar
functional_test.go:376: (dbg) Done: out/minikube-darwin-amd64 -p functional-100827 image save gcr.io/google-containers/addon-resizer:functional-100827 /Users/jenkins/workspace/addon-resizer-save.tar: (1.701950775s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.70s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.76s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:388: (dbg) Run:  out/minikube-darwin-amd64 -p functional-100827 image rm gcr.io/google-containers/addon-resizer:functional-100827
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-100827 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.76s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:405: (dbg) Run:  out/minikube-darwin-amd64 -p functional-100827 image load /Users/jenkins/workspace/addon-resizer-save.tar
functional_test.go:405: (dbg) Done: out/minikube-darwin-amd64 -p functional-100827 image load /Users/jenkins/workspace/addon-resizer-save.tar: (1.830347055s)
functional_test.go:444: (dbg) Run:  out/minikube-darwin-amd64 -p functional-100827 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (2.15s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (3.01s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:415: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-100827
functional_test.go:420: (dbg) Run:  out/minikube-darwin-amd64 -p functional-100827 image save --daemon gcr.io/google-containers/addon-resizer:functional-100827

=== CONT  TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:420: (dbg) Done: out/minikube-darwin-amd64 -p functional-100827 image save --daemon gcr.io/google-containers/addon-resizer:functional-100827: (2.893289877s)
functional_test.go:425: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-100827
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (3.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:127: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-100827 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.14s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:147: (dbg) Run:  kubectl --context functional-100827 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:342: "nginx-svc" [e971628a-1c4f-487c-a52c-df8708d898b7] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
helpers_test.go:342: "nginx-svc" [e971628a-1c4f-487c-a52c-df8708d898b7] Running

=== CONT  TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:151: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.008062086s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.14s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:169: (dbg) Run:  kubectl --context functional-100827 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:234: tunnel at http://127.0.0.1 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:369: (dbg) stopping [out/minikube-darwin-amd64 -p functional-100827 tunnel --alsologtostderr] ...
helpers_test.go:500: unable to terminate pid 24990: operation not permitted
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.61s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.61s)

TestFunctional/parallel/ProfileCmd/profile_list (0.52s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1311: Took "442.069407ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1325: Took "81.971769ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.52s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.51s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1362: Took "429.115674ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1375: Took "80.046519ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.51s)

TestFunctional/parallel/MountCmd/any-port (8.9s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:66: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-100827 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port3321695032/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:100: wrote "test-1668017554789545000" to /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port3321695032/001/created-by-test
functional_test_mount_test.go:100: wrote "test-1668017554789545000" to /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port3321695032/001/created-by-test-removed-by-pod
functional_test_mount_test.go:100: wrote "test-1668017554789545000" to /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port3321695032/001/test-1668017554789545000
functional_test_mount_test.go:108: (dbg) Run:  out/minikube-darwin-amd64 -p functional-100827 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:108: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-100827 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (390.180962ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:108: (dbg) Run:  out/minikube-darwin-amd64 -p functional-100827 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 -p functional-100827 ssh -- ls -la /mount-9p
functional_test_mount_test.go:126: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov  9 18:12 created-by-test
-rw-r--r-- 1 docker docker 24 Nov  9 18:12 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov  9 18:12 test-1668017554789545000
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-darwin-amd64 -p functional-100827 ssh cat /mount-9p/test-1668017554789545000
functional_test_mount_test.go:141: (dbg) Run:  kubectl --context functional-100827 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:146: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:342: "busybox-mount" [0b583e6b-7fdb-4d4c-b3fa-451ebbe749ed] Pending
helpers_test.go:342: "busybox-mount" [0b583e6b-7fdb-4d4c-b3fa-451ebbe749ed] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:342: "busybox-mount" [0b583e6b-7fdb-4d4c-b3fa-451ebbe749ed] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted

=== CONT  TestFunctional/parallel/MountCmd/any-port
helpers_test.go:342: "busybox-mount" [0b583e6b-7fdb-4d4c-b3fa-451ebbe749ed] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:146: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.007911396s
functional_test_mount_test.go:162: (dbg) Run:  kubectl --context functional-100827 logs busybox-mount
functional_test_mount_test.go:174: (dbg) Run:  out/minikube-darwin-amd64 -p functional-100827 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:174: (dbg) Run:  out/minikube-darwin-amd64 -p functional-100827 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 -p functional-100827 ssh "sudo umount -f /mount-9p"

=== CONT  TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:87: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-100827 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port3321695032/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.90s)

TestFunctional/parallel/MountCmd/specific-port (2.26s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:206: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-100827 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdspecific-port292475676/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-darwin-amd64 -p functional-100827 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:236: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-100827 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (395.502103ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **

=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-darwin-amd64 -p functional-100827 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:250: (dbg) Run:  out/minikube-darwin-amd64 -p functional-100827 ssh -- ls -la /mount-9p
functional_test_mount_test.go:254: guest mount directory contents
total 0
functional_test_mount_test.go:256: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-100827 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdspecific-port292475676/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...

=== CONT  TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:257: reading mount text
functional_test_mount_test.go:271: done reading mount text
functional_test_mount_test.go:223: (dbg) Run:  out/minikube-darwin-amd64 -p functional-100827 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:223: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-100827 ssh "sudo umount -f /mount-9p": exit status 1 (370.38123ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:225: "out/minikube-darwin-amd64 -p functional-100827 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:227: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-100827 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdspecific-port292475676/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.26s)

TestFunctional/delete_addon-resizer_images (0.15s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:186: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:186: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-100827
--- PASS: TestFunctional/delete_addon-resizer_images (0.15s)

TestFunctional/delete_my-image_image (0.06s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:194: (dbg) Run:  docker rmi -f localhost/my-image:functional-100827
--- PASS: TestFunctional/delete_my-image_image (0.06s)

TestFunctional/delete_minikube_cached_images (0.06s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:202: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-100827
--- PASS: TestFunctional/delete_minikube_cached_images (0.06s)

TestJSONOutput/start/Command (79.94s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-102025 --output=json --user=testUser --memory=2200 --wait=true --driver=docker 
E1109 10:20:56.528019   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/addons-100328/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-102025 --output=json --user=testUser --memory=2200 --wait=true --driver=docker : (1m19.938859035s)
--- PASS: TestJSONOutput/start/Command (79.94s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.63s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-102025 --output=json --user=testUser
E1109 10:21:45.275103   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/functional-100827/client.crt: no such file or directory
--- PASS: TestJSONOutput/pause/Command (0.63s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.58s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-102025 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.58s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (12.29s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-102025 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-102025 --output=json --user=testUser: (12.291859016s)
--- PASS: TestJSONOutput/stop/Command (12.29s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.75s)
=== RUN   TestErrorJSONOutput
json_output_test.go:149: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-102200 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:149: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-102200 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (337.349096ms)

-- stdout --
	{"specversion":"1.0","id":"b87f72ad-81e5-4036-b6df-ea51fb3c5482","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-102200] minikube v1.28.0 on Darwin 13.0","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f901718b-7ed2-48cd-95ad-fa1a7d67abdf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=15331"}}
	{"specversion":"1.0","id":"47dcb32a-6a63-4933-a2be-611475f8f28b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/15331-22028/kubeconfig"}}
	{"specversion":"1.0","id":"0276e543-f989-4754-b8f9-b973226d6d18","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"8302416b-5d37-4b57-859b-2eef94239ff9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a2e65272-c38a-4785-a1e1-c493ae527e21","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/15331-22028/.minikube"}}
	{"specversion":"1.0","id":"091581b2-f36b-42d8-bbcb-9cbdf061bd3a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-102200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-102200
--- PASS: TestErrorJSONOutput (0.75s)

TestKicCustomNetwork/create_custom_network (30.74s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-102201 --network=
E1109 10:22:12.976057   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/functional-100827/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-102201 --network=: (28.105896953s)
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-102201" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-102201
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-102201: (2.576472682s)
--- PASS: TestKicCustomNetwork/create_custom_network (30.74s)

TestKicCustomNetwork/use_default_bridge_network (29.76s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-network-102232 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-darwin-amd64 start -p docker-network-102232 --network=bridge: (27.303363142s)
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-102232" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-network-102232
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-network-102232: (2.397518751s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (29.76s)

TestKicExistingNetwork (29.54s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:122: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-darwin-amd64 start -p existing-network-102302 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-darwin-amd64 start -p existing-network-102302 --network=existing-network: (26.759875566s)
helpers_test.go:175: Cleaning up "existing-network-102302" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p existing-network-102302
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p existing-network-102302: (2.416445107s)
--- PASS: TestKicExistingNetwork (29.54s)

TestKicCustomSubnet (28.67s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-subnet-102331 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-subnet-102331 --subnet=192.168.60.0/24: (26.015122068s)
kic_custom_network_test.go:133: (dbg) Run:  docker network inspect custom-subnet-102331 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-102331" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p custom-subnet-102331
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p custom-subnet-102331: (2.594034712s)
--- PASS: TestKicCustomSubnet (28.67s)

TestMainNoArgs (0.08s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.08s)

TestMinikubeProfile (60.71s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-102400 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-102400 --driver=docker : (27.04913457s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-102400 --driver=docker 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-102400 --driver=docker : (26.67967414s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-102400
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-102400
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-102400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-102400
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-102400: (2.565793129s)
helpers_test.go:175: Cleaning up "first-102400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-102400
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-102400: (2.613342303s)
--- PASS: TestMinikubeProfile (60.71s)

TestMountStart/serial/StartWithMountFirst (7.2s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-102501 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-1-102501 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker : (6.196265408s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.20s)

TestMountStart/serial/VerifyMountFirst (0.4s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-102501 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.40s)

TestMountStart/serial/StartWithMountSecond (7.19s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-102501 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-102501 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker : (6.184504556s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.19s)

TestMountStart/serial/VerifyMountSecond (0.39s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-102501 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.39s)

TestMountStart/serial/DeleteFirst (2.13s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p mount-start-1-102501 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p mount-start-1-102501 --alsologtostderr -v=5: (2.131952197s)
--- PASS: TestMountStart/serial/DeleteFirst (2.13s)

TestMountStart/serial/VerifyMountPostDelete (0.4s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-102501 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.40s)

TestMountStart/serial/Stop (1.59s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 stop -p mount-start-2-102501
mount_start_test.go:155: (dbg) Done: out/minikube-darwin-amd64 stop -p mount-start-2-102501: (1.593088012s)
--- PASS: TestMountStart/serial/Stop (1.59s)

TestMountStart/serial/RestartStopped (5.4s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-102501
mount_start_test.go:166: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-102501: (4.39897711s)
--- PASS: TestMountStart/serial/RestartStopped (5.40s)

TestMountStart/serial/VerifyMountPostStop (0.39s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-102501 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.39s)

TestMultiNode/serial/FreshStart2Nodes (83.07s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-102528 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker 
E1109 10:25:56.446084   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/addons-100328/client.crt: no such file or directory
E1109 10:26:45.192445   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/functional-100827/client.crt: no such file or directory
multinode_test.go:83: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-102528 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker : (1m22.37703827s)
multinode_test.go:89: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-102528 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (83.07s)

TestMultiNode/serial/DeployApp2Nodes (5.6s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-102528 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-102528 -- rollout status deployment/busybox
multinode_test.go:484: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-102528 -- rollout status deployment/busybox: (3.776458987s)
multinode_test.go:490: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-102528 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:502: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-102528 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:510: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-102528 -- exec busybox-65db55d5d6-cx4lf -- nslookup kubernetes.io
multinode_test.go:510: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-102528 -- exec busybox-65db55d5d6-lbxzv -- nslookup kubernetes.io
multinode_test.go:520: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-102528 -- exec busybox-65db55d5d6-cx4lf -- nslookup kubernetes.default
multinode_test.go:520: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-102528 -- exec busybox-65db55d5d6-lbxzv -- nslookup kubernetes.default
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-102528 -- exec busybox-65db55d5d6-cx4lf -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-102528 -- exec busybox-65db55d5d6-lbxzv -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.60s)

TestMultiNode/serial/PingHostFrom2Pods (0.9s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:538: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-102528 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-102528 -- exec busybox-65db55d5d6-cx4lf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-102528 -- exec busybox-65db55d5d6-cx4lf -- sh -c "ping -c 1 192.168.65.2"
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-102528 -- exec busybox-65db55d5d6-lbxzv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-102528 -- exec busybox-65db55d5d6-lbxzv -- sh -c "ping -c 1 192.168.65.2"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.90s)

TestMultiNode/serial/AddNode (24.92s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:108: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-102528 -v 3 --alsologtostderr
E1109 10:27:19.502386   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/addons-100328/client.crt: no such file or directory
multinode_test.go:108: (dbg) Done: out/minikube-darwin-amd64 node add -p multinode-102528 -v 3 --alsologtostderr: (23.925897208s)
multinode_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-102528 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (24.92s)

TestMultiNode/serial/ProfileList (0.43s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:130: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.43s)

TestMultiNode/serial/CopyFile (14.35s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-102528 status --output json --alsologtostderr
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-102528 cp testdata/cp-test.txt multinode-102528:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-102528 ssh -n multinode-102528 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-102528 cp multinode-102528:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiNodeserialCopyFile3385420501/001/cp-test_multinode-102528.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-102528 ssh -n multinode-102528 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-102528 cp multinode-102528:/home/docker/cp-test.txt multinode-102528-m02:/home/docker/cp-test_multinode-102528_multinode-102528-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-102528 ssh -n multinode-102528 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-102528 ssh -n multinode-102528-m02 "sudo cat /home/docker/cp-test_multinode-102528_multinode-102528-m02.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-102528 cp multinode-102528:/home/docker/cp-test.txt multinode-102528-m03:/home/docker/cp-test_multinode-102528_multinode-102528-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-102528 ssh -n multinode-102528 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-102528 ssh -n multinode-102528-m03 "sudo cat /home/docker/cp-test_multinode-102528_multinode-102528-m03.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-102528 cp testdata/cp-test.txt multinode-102528-m02:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-102528 ssh -n multinode-102528-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-102528 cp multinode-102528-m02:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiNodeserialCopyFile3385420501/001/cp-test_multinode-102528-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-102528 ssh -n multinode-102528-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-102528 cp multinode-102528-m02:/home/docker/cp-test.txt multinode-102528:/home/docker/cp-test_multinode-102528-m02_multinode-102528.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-102528 ssh -n multinode-102528-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-102528 ssh -n multinode-102528 "sudo cat /home/docker/cp-test_multinode-102528-m02_multinode-102528.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-102528 cp multinode-102528-m02:/home/docker/cp-test.txt multinode-102528-m03:/home/docker/cp-test_multinode-102528-m02_multinode-102528-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-102528 ssh -n multinode-102528-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-102528 ssh -n multinode-102528-m03 "sudo cat /home/docker/cp-test_multinode-102528-m02_multinode-102528-m03.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-102528 cp testdata/cp-test.txt multinode-102528-m03:/home/docker/cp-test.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-102528 ssh -n multinode-102528-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-102528 cp multinode-102528-m03:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiNodeserialCopyFile3385420501/001/cp-test_multinode-102528-m03.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-102528 ssh -n multinode-102528-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-102528 cp multinode-102528-m03:/home/docker/cp-test.txt multinode-102528:/home/docker/cp-test_multinode-102528-m03_multinode-102528.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-102528 ssh -n multinode-102528-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-102528 ssh -n multinode-102528 "sudo cat /home/docker/cp-test_multinode-102528-m03_multinode-102528.txt"
helpers_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-102528 cp multinode-102528-m03:/home/docker/cp-test.txt multinode-102528-m02:/home/docker/cp-test_multinode-102528-m03_multinode-102528-m02.txt
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-102528 ssh -n multinode-102528-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:532: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-102528 ssh -n multinode-102528-m02 "sudo cat /home/docker/cp-test_multinode-102528-m03_multinode-102528-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (14.35s)

TestMultiNode/serial/StopNode (13.75s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-102528 node stop m03
multinode_test.go:208: (dbg) Done: out/minikube-darwin-amd64 -p multinode-102528 node stop m03: (12.272279244s)
multinode_test.go:214: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-102528 status
multinode_test.go:214: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-102528 status: exit status 7 (736.328111ms)

-- stdout --
	multinode-102528
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-102528-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-102528-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:221: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-102528 status --alsologtostderr
multinode_test.go:221: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-102528 status --alsologtostderr: exit status 7 (738.165779ms)

-- stdout --
	multinode-102528
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-102528-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-102528-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1109 10:27:51.201625   28676 out.go:296] Setting OutFile to fd 1 ...
	I1109 10:27:51.201791   28676 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1109 10:27:51.201796   28676 out.go:309] Setting ErrFile to fd 2...
	I1109 10:27:51.201799   28676 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1109 10:27:51.201904   28676 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15331-22028/.minikube/bin
	I1109 10:27:51.202104   28676 out.go:303] Setting JSON to false
	I1109 10:27:51.202130   28676 mustload.go:65] Loading cluster: multinode-102528
	I1109 10:27:51.202170   28676 notify.go:220] Checking for updates...
	I1109 10:27:51.202444   28676 config.go:180] Loaded profile config "multinode-102528": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1109 10:27:51.202458   28676 status.go:255] checking status of multinode-102528 ...
	I1109 10:27:51.202895   28676 cli_runner.go:164] Run: docker container inspect multinode-102528 --format={{.State.Status}}
	I1109 10:27:51.259390   28676 status.go:330] multinode-102528 host status = "Running" (err=<nil>)
	I1109 10:27:51.259421   28676 host.go:66] Checking if "multinode-102528" exists ...
	I1109 10:27:51.259671   28676 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-102528
	I1109 10:27:51.316798   28676 host.go:66] Checking if "multinode-102528" exists ...
	I1109 10:27:51.317100   28676 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 10:27:51.317179   28676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102528
	I1109 10:27:51.375690   28676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62203 SSHKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/multinode-102528/id_rsa Username:docker}
	I1109 10:27:51.459493   28676 ssh_runner.go:195] Run: systemctl --version
	I1109 10:27:51.463951   28676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 10:27:51.475401   28676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-102528
	I1109 10:27:51.532470   28676 kubeconfig.go:92] found "multinode-102528" server: "https://127.0.0.1:62202"
	I1109 10:27:51.532496   28676 api_server.go:165] Checking apiserver status ...
	I1109 10:27:51.532552   28676 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1109 10:27:51.542718   28676 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1626/cgroup
	W1109 10:27:51.551040   28676 api_server.go:176] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1626/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1109 10:27:51.551108   28676 ssh_runner.go:195] Run: ls
	I1109 10:27:51.554598   28676 api_server.go:252] Checking apiserver healthz at https://127.0.0.1:62202/healthz ...
	I1109 10:27:51.559954   28676 api_server.go:278] https://127.0.0.1:62202/healthz returned 200:
	ok
	I1109 10:27:51.559966   28676 status.go:421] multinode-102528 apiserver status = Running (err=<nil>)
	I1109 10:27:51.559977   28676 status.go:257] multinode-102528 status: &{Name:multinode-102528 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1109 10:27:51.559991   28676 status.go:255] checking status of multinode-102528-m02 ...
	I1109 10:27:51.560257   28676 cli_runner.go:164] Run: docker container inspect multinode-102528-m02 --format={{.State.Status}}
	I1109 10:27:51.617526   28676 status.go:330] multinode-102528-m02 host status = "Running" (err=<nil>)
	I1109 10:27:51.617549   28676 host.go:66] Checking if "multinode-102528-m02" exists ...
	I1109 10:27:51.617815   28676 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-102528-m02
	I1109 10:27:51.674481   28676 host.go:66] Checking if "multinode-102528-m02" exists ...
	I1109 10:27:51.674801   28676 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1109 10:27:51.674866   28676 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-102528-m02
	I1109 10:27:51.731510   28676 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62261 SSHKeyPath:/Users/jenkins/minikube-integration/15331-22028/.minikube/machines/multinode-102528-m02/id_rsa Username:docker}
	I1109 10:27:51.816738   28676 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1109 10:27:51.826064   28676 status.go:257] multinode-102528-m02 status: &{Name:multinode-102528-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1109 10:27:51.826080   28676 status.go:255] checking status of multinode-102528-m03 ...
	I1109 10:27:51.826369   28676 cli_runner.go:164] Run: docker container inspect multinode-102528-m03 --format={{.State.Status}}
	I1109 10:27:51.882620   28676 status.go:330] multinode-102528-m03 host status = "Stopped" (err=<nil>)
	I1109 10:27:51.882643   28676 status.go:343] host is not running, skipping remaining checks
	I1109 10:27:51.882652   28676 status.go:257] multinode-102528-m03 status: &{Name:multinode-102528-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (13.75s)

TestMultiNode/serial/StartAfterStop (19.21s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:242: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:252: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-102528 node start m03 --alsologtostderr
multinode_test.go:252: (dbg) Done: out/minikube-darwin-amd64 -p multinode-102528 node start m03 --alsologtostderr: (18.138316623s)
multinode_test.go:259: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-102528 status
multinode_test.go:273: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (19.21s)

TestMultiNode/serial/RestartKeepsNodes (115.63s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-102528
multinode_test.go:288: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-102528
multinode_test.go:288: (dbg) Done: out/minikube-darwin-amd64 stop -p multinode-102528: (37.731809531s)
multinode_test.go:293: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-102528 --wait=true -v=8 --alsologtostderr
multinode_test.go:293: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-102528 --wait=true -v=8 --alsologtostderr: (1m17.781964483s)
multinode_test.go:298: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-102528
--- PASS: TestMultiNode/serial/RestartKeepsNodes (115.63s)

TestMultiNode/serial/DeleteNode (16.83s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:392: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-102528 node delete m03
multinode_test.go:392: (dbg) Done: out/minikube-darwin-amd64 -p multinode-102528 node delete m03: (15.965501211s)
multinode_test.go:398: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-102528 status --alsologtostderr
multinode_test.go:412: (dbg) Run:  docker volume ls
multinode_test.go:422: (dbg) Run:  kubectl get nodes
multinode_test.go:430: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (16.83s)

TestMultiNode/serial/StopMultiNode (24.88s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:312: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-102528 stop
multinode_test.go:312: (dbg) Done: out/minikube-darwin-amd64 -p multinode-102528 stop: (24.527008428s)
multinode_test.go:318: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-102528 status
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-102528 status: exit status 7 (180.933479ms)

-- stdout --
	multinode-102528
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-102528-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-102528 status --alsologtostderr
multinode_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-102528 status --alsologtostderr: exit status 7 (168.030797ms)

-- stdout --
	multinode-102528
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-102528-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1109 10:30:48.307246   29312 out.go:296] Setting OutFile to fd 1 ...
	I1109 10:30:48.307434   29312 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1109 10:30:48.307439   29312 out.go:309] Setting ErrFile to fd 2...
	I1109 10:30:48.307443   29312 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I1109 10:30:48.307554   29312 root.go:334] Updating PATH: /Users/jenkins/minikube-integration/15331-22028/.minikube/bin
	I1109 10:30:48.307748   29312 out.go:303] Setting JSON to false
	I1109 10:30:48.307773   29312 mustload.go:65] Loading cluster: multinode-102528
	I1109 10:30:48.307813   29312 notify.go:220] Checking for updates...
	I1109 10:30:48.308139   29312 config.go:180] Loaded profile config "multinode-102528": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.25.3
	I1109 10:30:48.308149   29312 status.go:255] checking status of multinode-102528 ...
	I1109 10:30:48.308583   29312 cli_runner.go:164] Run: docker container inspect multinode-102528 --format={{.State.Status}}
	I1109 10:30:48.364720   29312 status.go:330] multinode-102528 host status = "Stopped" (err=<nil>)
	I1109 10:30:48.364740   29312 status.go:343] host is not running, skipping remaining checks
	I1109 10:30:48.364746   29312 status.go:257] multinode-102528 status: &{Name:multinode-102528 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1109 10:30:48.364772   29312 status.go:255] checking status of multinode-102528-m02 ...
	I1109 10:30:48.365056   29312 cli_runner.go:164] Run: docker container inspect multinode-102528-m02 --format={{.State.Status}}
	I1109 10:30:48.420117   29312 status.go:330] multinode-102528-m02 host status = "Stopped" (err=<nil>)
	I1109 10:30:48.420138   29312 status.go:343] host is not running, skipping remaining checks
	I1109 10:30:48.420145   29312 status.go:257] multinode-102528-m02 status: &{Name:multinode-102528-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.88s)

TestMultiNode/serial/ValidateNameConflict (31.63s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:441: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-102528
multinode_test.go:450: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-102528-m02 --driver=docker 
multinode_test.go:450: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-102528-m02 --driver=docker : exit status 14 (395.58274ms)

-- stdout --
	* [multinode-102528-m02] minikube v1.28.0 on Darwin 13.0
	  - MINIKUBE_LOCATION=15331
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15331-22028/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15331-22028/.minikube
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-102528-m02' is duplicated with machine name 'multinode-102528-m02' in profile 'multinode-102528'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:458: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-102528-m03 --driver=docker 
multinode_test.go:458: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-102528-m03 --driver=docker : (28.069993677s)
multinode_test.go:465: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-102528
multinode_test.go:465: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-102528: exit status 80 (479.492669ms)

-- stdout --
	* Adding node m03 to cluster multinode-102528
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-102528-m03 already exists in multinode-102528-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:470: (dbg) Run:  out/minikube-darwin-amd64 delete -p multinode-102528-m03
multinode_test.go:470: (dbg) Done: out/minikube-darwin-amd64 delete -p multinode-102528-m03: (2.627172137s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (31.63s)

TestPreload (146.06s)
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-103506 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4
E1109 10:35:56.428994   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/addons-100328/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-103506 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.24.4: (56.74987469s)
preload_test.go:57: (dbg) Run:  out/minikube-darwin-amd64 ssh -p test-preload-103506 -- docker pull gcr.io/k8s-minikube/busybox
preload_test.go:57: (dbg) Done: out/minikube-darwin-amd64 ssh -p test-preload-103506 -- docker pull gcr.io/k8s-minikube/busybox: (2.574136933s)
preload_test.go:67: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-103506 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --kubernetes-version=v1.24.6
E1109 10:36:45.175347   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/functional-100827/client.crt: no such file or directory
preload_test.go:67: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-103506 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --kubernetes-version=v1.24.6: (1m23.533756931s)
preload_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 ssh -p test-preload-103506 -- docker images
helpers_test.go:175: Cleaning up "test-preload-103506" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-103506
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-103506: (2.784954889s)
--- PASS: TestPreload (146.06s)

TestScheduledStopUnix (101.89s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-103733 --memory=2048 --driver=docker 
scheduled_stop_test.go:128: (dbg) Done: out/minikube-darwin-amd64 start -p scheduled-stop-103733 --memory=2048 --driver=docker : (27.660101228s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-103733 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.TimeToStop}} -p scheduled-stop-103733 -n scheduled-stop-103733
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-103733 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-103733 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-103733 -n scheduled-stop-103733
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-103733
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-103733 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-103733
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p scheduled-stop-103733: exit status 7 (116.06715ms)
-- stdout --
	scheduled-stop-103733
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-103733 -n scheduled-stop-103733
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-103733 -n scheduled-stop-103733: exit status 7 (112.479074ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-103733" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-103733
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p scheduled-stop-103733: (2.306272179s)
--- PASS: TestScheduledStopUnix (101.89s)
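
Note: the scheduled-stop sequence above reduces to three operations: arm a stop, cancel it, then arm a short one and let it fire; once the host is down, minikube status exits with status 7, which the test treats as expected for a stopped profile. A rough manual equivalent (profile name illustrative):

$ minikube start -p sched-demo --memory=2048 --driver=docker
$ minikube stop -p sched-demo --schedule 5m                   # arm a stop five minutes out
$ minikube status --format={{.TimeToStop}} -p sched-demo      # shows the pending countdown
$ minikube stop -p sched-demo --cancel-scheduled              # disarm it
$ minikube stop -p sched-demo --schedule 15s                  # arm a short one and wait for it to fire
$ minikube status -p sched-demo                               # exit status 7 once stopped (may be ok)
$ minikube delete -p sched-demo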

TestSkaffold (59.97s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/skaffold.exe1557324310 version
skaffold_test.go:63: skaffold version: v2.0.1
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-103914 --memory=2600 --driver=docker 
skaffold_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p skaffold-103914 --memory=2600 --driver=docker : (26.565678056s)
skaffold_test.go:86: copying out/minikube-darwin-amd64 to /Users/jenkins/workspace/out/minikube
skaffold_test.go:110: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/skaffold.exe1557324310 run --minikube-profile skaffold-103914 --kube-context skaffold-103914 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:110: (dbg) Done: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/skaffold.exe1557324310 run --minikube-profile skaffold-103914 --kube-context skaffold-103914 --status-check=true --port-forward=false --interactive=false: (18.876415643s)
skaffold_test.go:116: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:342: "leeroy-app-74c64c9f4d-qzvjc" [ffb2a337-8820-428f-9bb0-4db4a365002b] Running
skaffold_test.go:116: (dbg) TestSkaffold: app=leeroy-app healthy within 5.014112023s
skaffold_test.go:119: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:342: "leeroy-web-68df95ff7c-kq5tq" [5a6dc8ec-3e3f-4ccd-8b8b-333d06c6e810] Running
skaffold_test.go:119: (dbg) TestSkaffold: app=leeroy-web healthy within 5.009606485s
helpers_test.go:175: Cleaning up "skaffold-103914" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-103914
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p skaffold-103914: (2.89579349s)
--- PASS: TestSkaffold (59.97s)
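
Note: the skaffold check needs nothing beyond a running profile and the two context flags shown in the log; skaffold v2.0.1 builds and deploys the sample app, and the test then waits for the leeroy-app and leeroy-web pods. A sketch, assuming skaffold is on PATH and the working directory holds a project with a skaffold.yaml (profile name illustrative):

$ minikube start -p skaffold-demo --memory=2600 --driver=docker
$ skaffold run --minikube-profile skaffold-demo --kube-context skaffold-demo --status-check=true --port-forward=false --interactive=false
$ kubectl get pods -l app=leeroy-app    # should reach Running, as in the log above
$ minikube delete -p skaffold-demo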

TestInsufficientStorage (12.27s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-darwin-amd64 start -p insufficient-storage-104014 --memory=2048 --output=json --wait=true --driver=docker 
status_test.go:50: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p insufficient-storage-104014 --memory=2048 --output=json --wait=true --driver=docker : exit status 26 (9.14468339s)
-- stdout --
	{"specversion":"1.0","id":"c60a9db5-4ae7-431e-9a9a-2df4ae5e619d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-104014] minikube v1.28.0 on Darwin 13.0","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ef7b9563-fbee-4739-826c-ee3878969721","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=15331"}}
	{"specversion":"1.0","id":"29334ddc-62eb-4583-a59a-6a7c4e7d2b44","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/15331-22028/kubeconfig"}}
	{"specversion":"1.0","id":"9f5eb773-5138-4649-841e-2f0fd26ccfef","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"9259c6ba-e4fa-4629-b794-5e0904c98390","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ffc8fe1d-675b-4a08-84c1-9c05cc342782","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/15331-22028/.minikube"}}
	{"specversion":"1.0","id":"f889f40b-59d3-4138-bcd7-5bdf7cb74e04","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"c10e6812-1656-486c-94fb-3937ac5123a3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"c28f32e1-132a-4e60-ab89-5c075f3a925d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"e75482a6-61e9-45c8-91c4-782a0c1f5ad3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with root privileges"}}
	{"specversion":"1.0","id":"c34ef8b0-fba5-4ac9-83a0-b476a37f78c1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-104014 in cluster insufficient-storage-104014","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"3c4534ac-a4ac-4b60-8dd5-6f86757eb57f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"874022f3-b2fa-48e6-9db1-fe86cc91be3b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"750a5076-23e1-47ae-aed4-1343ab0c7a4d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-104014 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-104014 --output=json --layout=cluster: exit status 7 (386.958757ms)
-- stdout --
	{"Name":"insufficient-storage-104014","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.28.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-104014","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E1109 10:40:24.381897   31030 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-104014" does not appear in /Users/jenkins/minikube-integration/15331-22028/kubeconfig
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p insufficient-storage-104014 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p insufficient-storage-104014 --output=json --layout=cluster: exit status 7 (389.474796ms)
-- stdout --
	{"Name":"insufficient-storage-104014","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.28.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-104014","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E1109 10:40:24.771684   31040 status.go:415] kubeconfig endpoint: extract IP: "insufficient-storage-104014" does not appear in /Users/jenkins/minikube-integration/15331-22028/kubeconfig
	E1109 10:40:24.780424   31040 status.go:559] unable to read event log: stat: stat /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/insufficient-storage-104014/events.json: no such file or directory
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-104014" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p insufficient-storage-104014
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p insufficient-storage-104014: (2.352704775s)
--- PASS: TestInsufficientStorage (12.27s)
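
Note: exit status 26 (RSRC_DOCKER_STORAGE) comes from minikube's disk-space preflight; the test provokes it with the two MINIKUBE_TEST_* variables visible in the JSON events above, which make the check see a nearly full /var (capacity 100 with only 19 available, in whatever units the test harness assumes). A sketch of the same setup; values and profile name are illustrative, and per the error text --force would skip the check entirely:

$ MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 minikube start -p storage-demo --memory=2048 --output=json --driver=docker
$ echo $?                                                          # 26
$ minikube status -p storage-demo --output=json --layout=cluster   # StatusCode 507 InsufficientStorage, exit status 7
$ minikube delete -p storage-demo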

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (7.51s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.28.0 on darwin
- MINIKUBE_LOCATION=15331
- KUBECONFIG=/Users/jenkins/minikube-integration/15331-22028/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1352953755/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:
$ sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1352953755/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1352953755/001/.minikube/bin/docker-machine-driver-hyperkit 
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1352953755/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (7.51s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (10.13s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.28.0 on darwin
- MINIKUBE_LOCATION=15331
- KUBECONFIG=/Users/jenkins/minikube-integration/15331-22028/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_HOME=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1908553039/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:
$ sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1908553039/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1908553039/001/.minikube/bin/docker-machine-driver-hyperkit 
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current1908553039/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Downloading VM boot image ...
* Starting control plane node minikube in cluster minikube
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (10.13s)
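
Note: both skip-upgrade runs hit the same non-interactive limitation: the hyperkit driver binary must be owned by root:wheel and setuid, and with --interactive=false minikube will not prompt for the sudo password, so it prints the warning and proceeds with the existing driver. The two commands it asks for are exactly the ones shown above; the long temp path is the test's MINIKUBE_HOME, abbreviated here to a variable you would substitute:

$ sudo chown root:wheel "$MINIKUBE_HOME/.minikube/bin/docker-machine-driver-hyperkit"
$ sudo chmod u+s "$MINIKUBE_HOME/.minikube/bin/docker-machine-driver-hyperkit"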

TestStoppedBinaryUpgrade/Setup (0.98s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.98s)

TestStoppedBinaryUpgrade/MinikubeLogs (3.56s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:213: (dbg) Run:  out/minikube-darwin-amd64 logs -p stopped-upgrade-104552
version_upgrade_test.go:213: (dbg) Done: out/minikube-darwin-amd64 logs -p stopped-upgrade-104552: (3.560180043s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (3.56s)

TestPause/serial/Start (89.94s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-104645 --memory=2048 --install-addons=false --wait=all --driver=docker 
E1109 10:46:45.159575   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/functional-100827/client.crt: no such file or directory
E1109 10:47:45.894210   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/skaffold-103914/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-darwin-amd64 start -p pause-104645 --memory=2048 --install-addons=false --wait=all --driver=docker : (1m29.940458088s)
--- PASS: TestPause/serial/Start (89.94s)

TestPause/serial/SecondStartNoReconfiguration (56.78s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-104645 --alsologtostderr -v=1 --driver=docker 
=== CONT  TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Done: out/minikube-darwin-amd64 start -p pause-104645 --alsologtostderr -v=1 --driver=docker : (56.764232583s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (56.78s)

TestPause/serial/Pause (0.87s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-104645 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.87s)

TestPause/serial/VerifyStatus (0.49s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p pause-104645 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p pause-104645 --output=json --layout=cluster: exit status 2 (490.719958ms)
-- stdout --
	{"Name":"pause-104645","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 14 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.28.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-104645","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.49s)

TestPause/serial/Unpause (0.91s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-darwin-amd64 unpause -p pause-104645 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.91s)

TestPause/serial/PauseAgain (1.04s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-104645 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-darwin-amd64 pause -p pause-104645 --alsologtostderr -v=5: (1.040322554s)
--- PASS: TestPause/serial/PauseAgain (1.04s)

TestPause/serial/DeletePaused (2.91s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p pause-104645 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p pause-104645 --alsologtostderr -v=5: (2.904983212s)
--- PASS: TestPause/serial/DeletePaused (2.91s)

TestPause/serial/VerifyDeletedResources (0.58s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-104645
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-104645: exit status 1 (56.331359ms)
-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error: No such volume: pause-104645
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.58s)
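
Note: the pause group walks the full lifecycle: pause (after which status --layout=cluster exits with status 2 and reports StatusCode 418 "Paused"), unpause, pause again, delete, then confirm nothing is left behind; the failing docker volume inspect with "No such volume" is the desired outcome. A condensed sketch (profile name illustrative):

$ minikube pause -p pause-demo
$ minikube status -p pause-demo --output=json --layout=cluster   # exit status 2 while paused
$ minikube unpause -p pause-demo
$ minikube pause -p pause-demo
$ minikube delete -p pause-demo
$ docker volume inspect pause-demo                               # exit 1, "No such volume" confirms cleanup
$ docker ps -a; docker network ls                                # no leftover container or network for the profile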

TestNoKubernetes/serial/StartNoK8sWithVersion (0.42s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-104919 --no-kubernetes --kubernetes-version=1.20 --driver=docker 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-104919 --no-kubernetes --kubernetes-version=1.20 --driver=docker : exit status 14 (422.782068ms)
-- stdout --
	* [NoKubernetes-104919] minikube v1.28.0 on Darwin 13.0
	  - MINIKUBE_LOCATION=15331
	  - KUBECONFIG=/Users/jenkins/minikube-integration/15331-22028/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/15331-22028/.minikube
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.42s)
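
Note: the exit status 14 here is the intended usage error: --no-kubernetes and --kubernetes-version are mutually exclusive, and the message points at a lingering global config value as the usual cause. Its suggested fix, followed by a valid no-Kubernetes start (profile name illustrative):

$ minikube config unset kubernetes-version
$ minikube start -p nok8s-demo --no-kubernetes --driver=docker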

TestNoKubernetes/serial/StartWithK8s (29.48s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-104919 --driver=docker 
E1109 10:49:48.337092   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/functional-100827/client.crt: no such file or directory
no_kubernetes_test.go:95: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-104919 --driver=docker : (29.064849252s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-104919 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (29.48s)

TestNoKubernetes/serial/StartWithStopK8s (8.11s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-104919 --no-kubernetes --driver=docker 
no_kubernetes_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-104919 --no-kubernetes --driver=docker : (5.325236562s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-104919 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p NoKubernetes-104919 status -o json: exit status 2 (393.636481ms)
-- stdout --
	{"Name":"NoKubernetes-104919","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-darwin-amd64 delete -p NoKubernetes-104919
no_kubernetes_test.go:124: (dbg) Done: out/minikube-darwin-amd64 delete -p NoKubernetes-104919: (2.391215644s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (8.11s)

TestNoKubernetes/serial/Start (6.45s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-104919 --no-kubernetes --driver=docker 
E1109 10:50:02.042931   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/skaffold-103914/client.crt: no such file or directory
no_kubernetes_test.go:136: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-104919 --no-kubernetes --driver=docker : (6.454023222s)
--- PASS: TestNoKubernetes/serial/Start (6.45s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.38s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-104919 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-104919 "sudo systemctl is-active --quiet service kubelet": exit status 1 (375.973003ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.38s)

TestNoKubernetes/serial/ProfileList (15.16s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-darwin-amd64 profile list: (14.500097174s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (15.16s)

TestNoKubernetes/serial/Stop (1.58s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-amd64 stop -p NoKubernetes-104919
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-amd64 stop -p NoKubernetes-104919: (1.584890609s)
--- PASS: TestNoKubernetes/serial/Stop (1.58s)

TestNoKubernetes/serial/StartNoArgs (4.1s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-104919 --driver=docker 
no_kubernetes_test.go:191: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-104919 --driver=docker : (4.095056121s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (4.10s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.38s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-104919 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-104919 "sudo systemctl is-active --quiet service kubelet": exit status 1 (375.675102ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.38s)
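
Note: both VerifyK8sNotRunning checks lean on systemd exit codes: systemctl is-active --quiet exits non-zero when the unit is not active, so the "Process exited with status 3" seen over ssh is the passing condition for a node started with --no-kubernetes. The probe, verbatim from the run above:

$ minikube ssh -p NoKubernetes-104919 "sudo systemctl is-active --quiet service kubelet"   # non-zero exit expected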

TestNetworkPlugins/group/auto/Start (44.65s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p auto-104027 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker 
E1109 10:50:29.733040   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/skaffold-103914/client.crt: no such file or directory
E1109 10:50:56.528258   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/addons-100328/client.crt: no such file or directory
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p auto-104027 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker : (44.652833122s)
--- PASS: TestNetworkPlugins/group/auto/Start (44.65s)

TestNetworkPlugins/group/auto/KubeletFlags (0.4s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p auto-104027 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.40s)

TestNetworkPlugins/group/auto/NetCatPod (13.22s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context auto-104027 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-6rhd6" [ab1c8f96-ea21-4b0e-867a-dbcfaa8713c0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-5788d667bd-6rhd6" [ab1c8f96-ea21-4b0e-867a-dbcfaa8713c0] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 13.008296433s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (13.22s)

TestNetworkPlugins/group/auto/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:169: (dbg) Run:  kubectl --context auto-104027 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.12s)

TestNetworkPlugins/group/auto/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:188: (dbg) Run:  kubectl --context auto-104027 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.12s)

TestNetworkPlugins/group/auto/HairPin (5.11s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:238: (dbg) Run:  kubectl --context auto-104027 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context auto-104027 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.109831409s)
** stderr ** 
	command terminated with exit code 1
** /stderr **
--- PASS: TestNetworkPlugins/group/auto/HairPin (5.11s)
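
Note: every plugin group in this section runs the same four probes against a netcat deployment: kubelet flags via pgrep, in-cluster DNS, localhost reachability, and hairpin (the pod reaching itself through its own service); the auto and false groups pass even though the hairpin nc exits 1, an outcome the test tolerates for those configurations. The probes, using the auto profile's context:

$ minikube ssh -p auto-104027 "pgrep -a kubelet"
$ kubectl --context auto-104027 exec deployment/netcat -- nslookup kubernetes.default
$ kubectl --context auto-104027 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
$ kubectl --context auto-104027 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"   # hairpin check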

TestNetworkPlugins/group/kindnet/Start (48.79s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p kindnet-104027 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker 
E1109 10:51:45.274514   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/functional-100827/client.crt: no such file or directory
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p kindnet-104027 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker : (48.78930593s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (48.79s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:342: "kindnet-9v2xj" [7dd75ff3-9545-4428-b7df-e5fac9f6352a] Running
net_test.go:109: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.01671119s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.4s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kindnet-104027 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.40s)

TestNetworkPlugins/group/kindnet/NetCatPod (11.19s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context kindnet-104027 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-x4s4w" [f8b09574-df51-4ddc-ad40-1e4de7a6034d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-5788d667bd-x4s4w" [f8b09574-df51-4ddc-ad40-1e4de7a6034d] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.007564263s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.19s)

TestNetworkPlugins/group/kindnet/DNS (0.11s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kindnet-104027 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.11s)

TestNetworkPlugins/group/kindnet/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:188: (dbg) Run:  kubectl --context kindnet-104027 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.11s)

TestNetworkPlugins/group/kindnet/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:238: (dbg) Run:  kubectl --context kindnet-104027 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.12s)

TestNetworkPlugins/group/cilium/Start (98.16s)

=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p cilium-104028 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker 
=== CONT  TestNetworkPlugins/group/cilium/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p cilium-104028 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker : (1m38.163612439s)
--- PASS: TestNetworkPlugins/group/cilium/Start (98.16s)
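
Note: the CNI matrix in this group is driven entirely by the start flags: --cni=kindnet, cilium, calico, or bridge selects a plugin, --cni=false disables CNI, and --enable-default-cni=true uses the legacy default bridge; the probes in each group are otherwise identical. A representative start (profile name illustrative):

$ minikube start -p cni-demo --memory=2048 --wait=true --wait-timeout=5m --cni=cilium --driver=docker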

TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/cilium/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: waiting 10m0s for pods matching "k8s-app=cilium" in namespace "kube-system" ...
helpers_test.go:342: "cilium-wqpb8" [1e726c89-8cae-4b06-9262-0d93d94cb50f] Running
=== CONT  TestNetworkPlugins/group/cilium/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: k8s-app=cilium healthy within 5.01635738s
--- PASS: TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

TestNetworkPlugins/group/cilium/KubeletFlags (0.48s)

=== RUN   TestNetworkPlugins/group/cilium/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cilium-104028 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/cilium/KubeletFlags (0.48s)

TestNetworkPlugins/group/cilium/NetCatPod (13.67s)

=== RUN   TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context cilium-104028 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-vcw8v" [3c10324d-96ff-477e-869e-33ee3a29f621] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
=== CONT  TestNetworkPlugins/group/cilium/NetCatPod
helpers_test.go:342: "netcat-5788d667bd-vcw8v" [3c10324d-96ff-477e-869e-33ee3a29f621] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: app=netcat healthy within 13.044257388s
--- PASS: TestNetworkPlugins/group/cilium/NetCatPod (13.67s)

TestNetworkPlugins/group/calico/Start (325.7s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p calico-104028 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker 
=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p calico-104028 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker : (5m25.696928319s)
--- PASS: TestNetworkPlugins/group/calico/Start (325.70s)

TestNetworkPlugins/group/cilium/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/cilium/DNS
net_test.go:169: (dbg) Run:  kubectl --context cilium-104028 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/cilium/DNS (0.12s)

TestNetworkPlugins/group/cilium/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/cilium/Localhost
net_test.go:188: (dbg) Run:  kubectl --context cilium-104028 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/cilium/Localhost (0.12s)

TestNetworkPlugins/group/cilium/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/cilium/HairPin
net_test.go:238: (dbg) Run:  kubectl --context cilium-104028 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/cilium/HairPin (0.13s)

TestNetworkPlugins/group/false/Start (44.5s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p false-104027 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker 
E1109 10:55:02.041985   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/skaffold-103914/client.crt: no such file or directory
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p false-104027 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker : (44.495404771s)
--- PASS: TestNetworkPlugins/group/false/Start (44.50s)

TestNetworkPlugins/group/false/KubeletFlags (0.44s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p false-104027 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.44s)

TestNetworkPlugins/group/false/NetCatPod (14.21s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context false-104027 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-qsmlw" [c1fdf7fa-b2c1-4666-8819-7aba0d65acab] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-5788d667bd-qsmlw" [c1fdf7fa-b2c1-4666-8819-7aba0d65acab] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 14.008885132s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (14.21s)

TestNetworkPlugins/group/false/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:169: (dbg) Run:  kubectl --context false-104027 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.13s)

TestNetworkPlugins/group/false/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:188: (dbg) Run:  kubectl --context false-104027 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.11s)

TestNetworkPlugins/group/false/HairPin (5.11s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:238: (dbg) Run:  kubectl --context false-104027 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:238: (dbg) Non-zero exit: kubectl --context false-104027 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.114726059s)
** stderr ** 
	command terminated with exit code 1
** /stderr **
--- PASS: TestNetworkPlugins/group/false/HairPin (5.11s)

TestNetworkPlugins/group/bridge/Start (43.1s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p bridge-104027 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker 
E1109 10:55:56.523437   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/addons-100328/client.crt: no such file or directory
E1109 10:56:12.609215   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/auto-104027/client.crt: no such file or directory
E1109 10:56:12.614870   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/auto-104027/client.crt: no such file or directory
E1109 10:56:12.626996   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/auto-104027/client.crt: no such file or directory
E1109 10:56:12.648645   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/auto-104027/client.crt: no such file or directory
E1109 10:56:12.690864   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/auto-104027/client.crt: no such file or directory
E1109 10:56:12.771145   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/auto-104027/client.crt: no such file or directory
E1109 10:56:12.931705   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/auto-104027/client.crt: no such file or directory
E1109 10:56:13.253017   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/auto-104027/client.crt: no such file or directory
E1109 10:56:13.904277   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/auto-104027/client.crt: no such file or directory
E1109 10:56:15.184464   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/auto-104027/client.crt: no such file or directory
E1109 10:56:17.745970   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/auto-104027/client.crt: no such file or directory
E1109 10:56:22.866134   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/auto-104027/client.crt: no such file or directory
E1109 10:56:33.106707   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/auto-104027/client.crt: no such file or directory
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p bridge-104027 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker : (43.101724251s)
--- PASS: TestNetworkPlugins/group/bridge/Start (43.10s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.43s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p bridge-104027 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.43s)

TestNetworkPlugins/group/bridge/NetCatPod (12.19s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context bridge-104027 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-8chlb" [a62a9355-b725-4998-abd2-fbb5830a787d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-5788d667bd-8chlb" [a62a9355-b725-4998-abd2-fbb5830a787d] Running
E1109 10:56:45.271062   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/functional-100827/client.crt: no such file or directory
net_test.go:152: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.007434474s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.19s)

TestNetworkPlugins/group/bridge/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:169: (dbg) Run:  kubectl --context bridge-104027 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.13s)

TestNetworkPlugins/group/bridge/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:188: (dbg) Run:  kubectl --context bridge-104027 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.11s)

TestNetworkPlugins/group/bridge/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:238: (dbg) Run:  kubectl --context bridge-104027 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)

TestNetworkPlugins/group/enable-default-cni/Start (79.3s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p enable-default-cni-104027 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker 
E1109 10:56:53.586770   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/auto-104027/client.crt: no such file or directory
E1109 10:57:22.493219   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/kindnet-104027/client.crt: no such file or directory
E1109 10:57:22.499545   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/kindnet-104027/client.crt: no such file or directory
E1109 10:57:22.511435   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/kindnet-104027/client.crt: no such file or directory
E1109 10:57:22.533332   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/kindnet-104027/client.crt: no such file or directory
E1109 10:57:22.573453   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/kindnet-104027/client.crt: no such file or directory
E1109 10:57:22.653802   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/kindnet-104027/client.crt: no such file or directory
E1109 10:57:22.814168   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/kindnet-104027/client.crt: no such file or directory
E1109 10:57:23.135963   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/kindnet-104027/client.crt: no such file or directory
E1109 10:57:23.778116   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/kindnet-104027/client.crt: no such file or directory
E1109 10:57:25.058975   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/kindnet-104027/client.crt: no such file or directory
E1109 10:57:27.619265   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/kindnet-104027/client.crt: no such file or directory
E1109 10:57:32.739826   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/kindnet-104027/client.crt: no such file or directory
E1109 10:57:34.547566   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/auto-104027/client.crt: no such file or directory
E1109 10:57:42.980434   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/kindnet-104027/client.crt: no such file or directory
E1109 10:58:03.460531   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/kindnet-104027/client.crt: no such file or directory
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p enable-default-cni-104027 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker : (1m19.301211498s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (79.30s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.46s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p enable-default-cni-104027 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.46s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context enable-default-cni-104027 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-44n49" [8075af4a-1673-4019-a2b4-41efaf079e42] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:342: "netcat-5788d667bd-44n49" [8075af4a-1673-4019-a2b4-41efaf079e42] Running
net_test.go:152: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.008479505s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.19s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:169: (dbg) Run:  kubectl --context enable-default-cni-104027 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:188: (dbg) Run:  kubectl --context enable-default-cni-104027 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:238: (dbg) Run:  kubectl --context enable-default-cni-104027 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)

TestNetworkPlugins/group/kubenet/Start (49.86s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p kubenet-104027 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker 
E1109 10:58:44.420351   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/kindnet-104027/client.crt: no such file or directory
E1109 10:58:56.467069   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/auto-104027/client.crt: no such file or directory
net_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p kubenet-104027 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker : (49.863878108s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (49.86s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.4s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kubenet-104027 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.40s)

TestNetworkPlugins/group/kubenet/NetCatPod (13.2s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context kubenet-104027 replace --force -f testdata/netcat-deployment.yaml
net_test.go:152: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-877l6" [08ecc944-767e-47f5-8a1c-4016381fd7fc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1109 10:59:20.303630   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/cilium-104028/client.crt: no such file or directory
E1109 10:59:20.309125   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/cilium-104028/client.crt: no such file or directory
E1109 10:59:20.319205   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/cilium-104028/client.crt: no such file or directory
E1109 10:59:20.341256   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/cilium-104028/client.crt: no such file or directory
E1109 10:59:20.382939   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/cilium-104028/client.crt: no such file or directory
E1109 10:59:20.464012   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/cilium-104028/client.crt: no such file or directory
E1109 10:59:20.624155   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/cilium-104028/client.crt: no such file or directory
E1109 10:59:20.944428   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/cilium-104028/client.crt: no such file or directory
helpers_test.go:342: "netcat-5788d667bd-877l6" [08ecc944-767e-47f5-8a1c-4016381fd7fc] Running
E1109 10:59:21.585573   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/cilium-104028/client.crt: no such file or directory
E1109 10:59:22.867832   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/cilium-104028/client.crt: no such file or directory
E1109 10:59:25.428415   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/cilium-104028/client.crt: no such file or directory
net_test.go:152: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 13.007147095s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (13.20s)

TestNetworkPlugins/group/kubenet/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:169: (dbg) Run:  kubectl --context kubenet-104027 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.12s)

TestNetworkPlugins/group/kubenet/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:188: (dbg) Run:  kubectl --context kubenet-104027 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.11s)

TestNetworkPlugins/group/calico/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:342: "calico-node-6d7bs" [fe7343b5-0c1d-4612-a11a-f5cc91d76e78] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])

=== CONT  TestNetworkPlugins/group/calico/ControllerPod
net_test.go:109: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.018001381s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.02s)

TestNetworkPlugins/group/calico/KubeletFlags (0.43s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:122: (dbg) Run:  out/minikube-darwin-amd64 ssh -p calico-104028 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.43s)

TestNetworkPlugins/group/calico/NetCatPod (13.21s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:138: (dbg) Run:  kubectl --context calico-104028 replace --force -f testdata/netcat-deployment.yaml

=== CONT  TestNetworkPlugins/group/calico/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:342: "netcat-5788d667bd-kjrpg" [ea82f920-54aa-42b6-829e-e4be7fabd086] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1109 11:00:06.339855   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/kindnet-104027/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/calico/NetCatPod
helpers_test.go:342: "netcat-5788d667bd-kjrpg" [ea82f920-54aa-42b6-829e-e4be7fabd086] Running

=== CONT  TestNetworkPlugins/group/calico/NetCatPod
net_test.go:152: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.006763991s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.21s)

TestNetworkPlugins/group/calico/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:169: (dbg) Run:  kubectl --context calico-104028 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.13s)

TestNetworkPlugins/group/calico/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:188: (dbg) Run:  kubectl --context calico-104028 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.11s)

TestNetworkPlugins/group/calico/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:238: (dbg) Run:  kubectl --context calico-104028 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.11s)

TestStartStop/group/no-preload/serial/FirstStart (55.6s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-110035 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.25.3
E1109 11:00:38.237418   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/false-104027/client.crt: no such file or directory
E1109 11:00:39.584680   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/addons-100328/client.crt: no such file or directory
E1109 11:00:42.229167   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/cilium-104028/client.crt: no such file or directory
E1109 11:00:48.556419   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/false-104027/client.crt: no such file or directory
E1109 11:00:56.522528   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/addons-100328/client.crt: no such file or directory
E1109 11:01:09.036790   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/false-104027/client.crt: no such file or directory
E1109 11:01:12.606270   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/auto-104027/client.crt: no such file or directory
E1109 11:01:25.087773   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/skaffold-103914/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-110035 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.25.3: (55.60462161s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (55.60s)

TestStartStop/group/no-preload/serial/DeployApp (9.26s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-110035 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [f719a728-4e61-433b-b098-078c878adc06] Pending
helpers_test.go:342: "busybox" [f719a728-4e61-433b-b098-078c878adc06] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1109 11:01:33.748871   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/bridge-104027/client.crt: no such file or directory
E1109 11:01:33.753970   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/bridge-104027/client.crt: no such file or directory
E1109 11:01:33.764181   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/bridge-104027/client.crt: no such file or directory
E1109 11:01:33.784422   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/bridge-104027/client.crt: no such file or directory
E1109 11:01:33.824493   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/bridge-104027/client.crt: no such file or directory
E1109 11:01:33.905324   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/bridge-104027/client.crt: no such file or directory
E1109 11:01:34.065385   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/bridge-104027/client.crt: no such file or directory
helpers_test.go:342: "busybox" [f719a728-4e61-433b-b098-078c878adc06] Running
E1109 11:01:34.386389   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/bridge-104027/client.crt: no such file or directory
E1109 11:01:35.028615   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/bridge-104027/client.crt: no such file or directory
E1109 11:01:36.309585   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/bridge-104027/client.crt: no such file or directory
E1109 11:01:38.869986   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/bridge-104027/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.01153444s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-110035 exec busybox -- /bin/sh -c "ulimit -n"
E1109 11:01:40.305930   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/auto-104027/client.crt: no such file or directory
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.26s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.8s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p no-preload-110035 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-110035 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.80s)

TestStartStop/group/no-preload/serial/Stop (12.42s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p no-preload-110035 --alsologtostderr -v=3
E1109 11:01:43.990521   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/bridge-104027/client.crt: no such file or directory
E1109 11:01:45.269194   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/functional-100827/client.crt: no such file or directory
E1109 11:01:49.997843   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/false-104027/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p no-preload-110035 --alsologtostderr -v=3: (12.4191375s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.42s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.38s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-110035 -n no-preload-110035
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-110035 -n no-preload-110035: exit status 7 (111.727851ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p no-preload-110035 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.38s)

TestStartStop/group/no-preload/serial/SecondStart (300.07s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-110035 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.25.3
E1109 11:01:54.231654   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/bridge-104027/client.crt: no such file or directory
E1109 11:02:04.150675   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/cilium-104028/client.crt: no such file or directory
E1109 11:02:14.713083   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/bridge-104027/client.crt: no such file or directory
E1109 11:02:22.492065   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/kindnet-104027/client.crt: no such file or directory
E1109 11:02:50.178632   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/kindnet-104027/client.crt: no such file or directory
E1109 11:02:55.674939   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/bridge-104027/client.crt: no such file or directory
E1109 11:03:08.746771   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/enable-default-cni-104027/client.crt: no such file or directory
E1109 11:03:08.753203   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/enable-default-cni-104027/client.crt: no such file or directory
E1109 11:03:08.765424   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/enable-default-cni-104027/client.crt: no such file or directory
E1109 11:03:08.787616   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/enable-default-cni-104027/client.crt: no such file or directory
E1109 11:03:08.829749   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/enable-default-cni-104027/client.crt: no such file or directory
E1109 11:03:08.909895   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/enable-default-cni-104027/client.crt: no such file or directory
E1109 11:03:09.070056   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/enable-default-cni-104027/client.crt: no such file or directory
E1109 11:03:09.390199   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/enable-default-cni-104027/client.crt: no such file or directory
E1109 11:03:10.032471   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/enable-default-cni-104027/client.crt: no such file or directory
E1109 11:03:11.312579   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/enable-default-cni-104027/client.crt: no such file or directory
E1109 11:03:11.917285   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/false-104027/client.crt: no such file or directory
E1109 11:03:13.873921   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/enable-default-cni-104027/client.crt: no such file or directory
E1109 11:03:18.994145   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/enable-default-cni-104027/client.crt: no such file or directory
E1109 11:03:29.234350   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/enable-default-cni-104027/client.crt: no such file or directory
E1109 11:03:49.714728   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/enable-default-cni-104027/client.crt: no such file or directory
E1109 11:04:14.219307   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/kubenet-104027/client.crt: no such file or directory
E1109 11:04:14.224377   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/kubenet-104027/client.crt: no such file or directory
E1109 11:04:14.235613   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/kubenet-104027/client.crt: no such file or directory
E1109 11:04:14.255802   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/kubenet-104027/client.crt: no such file or directory
E1109 11:04:14.295945   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/kubenet-104027/client.crt: no such file or directory
E1109 11:04:14.378088   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/kubenet-104027/client.crt: no such file or directory
E1109 11:04:14.540335   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/kubenet-104027/client.crt: no such file or directory
E1109 11:04:14.860810   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/kubenet-104027/client.crt: no such file or directory
E1109 11:04:15.503064   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/kubenet-104027/client.crt: no such file or directory
E1109 11:04:16.784428   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/kubenet-104027/client.crt: no such file or directory
E1109 11:04:17.596425   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/bridge-104027/client.crt: no such file or directory
E1109 11:04:19.344965   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/kubenet-104027/client.crt: no such file or directory
E1109 11:04:20.300892   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/cilium-104028/client.crt: no such file or directory
E1109 11:04:24.465778   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/kubenet-104027/client.crt: no such file or directory

=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-110035 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --kubernetes-version=v1.25.3: (4m59.514846415s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-110035 -n no-preload-110035
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (300.07s)

TestStartStop/group/old-k8s-version/serial/Stop (1.59s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p old-k8s-version-110019 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p old-k8s-version-110019 --alsologtostderr -v=3: (1.589102701s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (1.59s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.4s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-110019 -n old-k8s-version-110019
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-110019 -n old-k8s-version-110019: exit status 7 (112.329517ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p old-k8s-version-110019 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.40s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (16.02s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-f4xbw" [35b010b5-eb82-4093-9a60-f53ef4a58cf7] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1109 11:06:58.068050   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/kubenet-104027/client.crt: no such file or directory
E1109 11:07:01.436224   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/bridge-104027/client.crt: no such file or directory
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-f4xbw" [35b010b5-eb82-4093-9a60-f53ef4a58cf7] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 16.0166528s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (16.02s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-f4xbw" [35b010b5-eb82-4093-9a60-f53ef4a58cf7] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005451748s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-110035 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.43s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p no-preload-110035 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.43s)

TestStartStop/group/no-preload/serial/Pause (3.17s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p no-preload-110035 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-110035 -n no-preload-110035
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-110035 -n no-preload-110035: exit status 2 (406.923472ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-110035 -n no-preload-110035
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-110035 -n no-preload-110035: exit status 2 (407.805165ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p no-preload-110035 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-110035 -n no-preload-110035
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-110035 -n no-preload-110035
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.17s)

TestStartStop/group/embed-certs/serial/FirstStart (44.24s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-110722 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.25.3
E1109 11:07:22.488073   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/kindnet-104027/client.crt: no such file or directory
E1109 11:07:41.693701   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/calico-104028/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-110722 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.25.3: (44.236939459s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (44.24s)

TestStartStop/group/embed-certs/serial/DeployApp (10.26s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-110722 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [e04043ed-f697-42b6-85cc-a69f159cc2d8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1109 11:08:08.742964   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/enable-default-cni-104027/client.crt: no such file or directory
helpers_test.go:342: "busybox" [e04043ed-f697-42b6-85cc-a69f159cc2d8] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.012245026s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-110722 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.26s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.75s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p embed-certs-110722 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-110722 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.75s)

TestStartStop/group/embed-certs/serial/Stop (12.4s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p embed-certs-110722 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p embed-certs-110722 --alsologtostderr -v=3: (12.402749077s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.40s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.39s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-110722 -n embed-certs-110722
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-110722 -n embed-certs-110722: exit status 7 (112.885253ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p embed-certs-110722 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.39s)

TestStartStop/group/embed-certs/serial/SecondStart (297.94s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-110722 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.25.3
E1109 11:08:36.437005   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/enable-default-cni-104027/client.crt: no such file or directory
E1109 11:09:14.216734   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/kubenet-104027/client.crt: no such file or directory
E1109 11:09:20.298668   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/cilium-104028/client.crt: no such file or directory
E1109 11:09:41.908783   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/kubenet-104027/client.crt: no such file or directory
E1109 11:09:57.840684   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/calico-104028/client.crt: no such file or directory
E1109 11:10:02.032883   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/skaffold-103914/client.crt: no such file or directory
E1109 11:10:25.533199   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/calico-104028/client.crt: no such file or directory
E1109 11:10:27.988545   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/false-104027/client.crt: no such file or directory
E1109 11:10:56.552132   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/addons-100328/client.crt: no such file or directory
E1109 11:11:12.646050   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/auto-104027/client.crt: no such file or directory
E1109 11:11:31.333027   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/no-preload-110035/client.crt: no such file or directory
E1109 11:11:31.339217   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/no-preload-110035/client.crt: no such file or directory
E1109 11:11:31.349520   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/no-preload-110035/client.crt: no such file or directory
E1109 11:11:31.370187   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/no-preload-110035/client.crt: no such file or directory
E1109 11:11:31.410914   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/no-preload-110035/client.crt: no such file or directory
E1109 11:11:31.492556   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/no-preload-110035/client.crt: no such file or directory
E1109 11:11:31.654136   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/no-preload-110035/client.crt: no such file or directory
E1109 11:11:31.974761   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/no-preload-110035/client.crt: no such file or directory
E1109 11:11:32.615057   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/no-preload-110035/client.crt: no such file or directory
E1109 11:11:33.791116   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/bridge-104027/client.crt: no such file or directory
E1109 11:11:33.897396   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/no-preload-110035/client.crt: no such file or directory
E1109 11:11:36.458203   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/no-preload-110035/client.crt: no such file or directory
E1109 11:11:41.580657   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/no-preload-110035/client.crt: no such file or directory
E1109 11:11:45.310594   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/functional-100827/client.crt: no such file or directory
E1109 11:11:51.821983   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/no-preload-110035/client.crt: no such file or directory
E1109 11:12:12.302208   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/no-preload-110035/client.crt: no such file or directory
E1109 11:12:22.532995   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/kindnet-104027/client.crt: no such file or directory
E1109 11:12:35.707485   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/auto-104027/client.crt: no such file or directory
E1109 11:12:53.263452   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/no-preload-110035/client.crt: no such file or directory
E1109 11:13:08.787850   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/enable-default-cni-104027/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-110722 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --kubernetes-version=v1.25.3: (4m57.500331917s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-110722 -n embed-certs-110722
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (297.94s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (13.02s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-56zwp" [21159a76-9ae0-45db-9b5d-e9c6339c49af] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-56zwp" [21159a76-9ae0-45db-9b5d-e9c6339c49af] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 13.014807647s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (13.02s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-56zwp" [21159a76-9ae0-45db-9b5d-e9c6339c49af] Running
E1109 11:13:45.579876   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/kindnet-104027/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006038171s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-110722 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.43s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p embed-certs-110722 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.43s)

TestStartStop/group/embed-certs/serial/Pause (3.16s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p embed-certs-110722 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-110722 -n embed-certs-110722
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-110722 -n embed-certs-110722: exit status 2 (406.863064ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-110722 -n embed-certs-110722
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-110722 -n embed-certs-110722: exit status 2 (440.616258ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p embed-certs-110722 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-110722 -n embed-certs-110722
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-110722 -n embed-certs-110722
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.16s)
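
The Pause subtest above leans on an exit-code convention worth spelling out: while components are paused or stopped, `minikube status` exits non-zero (status 2 here, which the harness notes "may be ok"), and after `unpause` the same status commands succeed outright. The sequence, sketched with the same placeholders as above:

    minikube pause -p <profile>
    minikube status --format={{.APIServer}} -p <profile>   # prints "Paused", exits 2
    minikube status --format={{.Kubelet}} -p <profile>     # prints "Stopped", exits 2
    minikube unpause -p <profile>
    minikube status --format={{.APIServer}} -p <profile>   # exits 0 again once resumed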

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-111353 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.25.3

=== CONT  TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-111353 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.25.3: (44.535564358s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (44.54s)
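
The point of this group is the non-default API server port; everything else matches a stock start. The same invocation, reflowed for readability with the profile name as a placeholder:

    minikube start -p <profile> \
      --driver=docker \
      --kubernetes-version=v1.25.3 \
      --memory=2200 \
      --apiserver-port=8444 \   # serve the Kubernetes API on 8444 rather than the default 8443
      --wait=true \
      --alsologtostderr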

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-111353 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:342: "busybox" [8040f495-aabf-40b2-b346-982e90e27219] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:342: "busybox" [8040f495-aabf-40b2-b346-982e90e27219] Running

=== CONT  TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.014758934s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-111353 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.33s)
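
DeployApp is a small end-to-end smoke test: create a pod from a manifest, wait for it to run, then exec a trivial command inside it. Replayed by hand (testdata/busybox.yaml is the suite's single-container pod manifest; `kubectl wait` is a stand-in for the label-based polling the Go helper performs):

    kubectl --context <profile> create -f testdata/busybox.yaml
    kubectl --context <profile> wait --for=condition=Ready pod/busybox --timeout=480s
    kubectl --context <profile> exec busybox -- /bin/sh -c "ulimit -n"   # proves exec reaches the container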

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p default-k8s-diff-port-111353 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-111353 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.76s)
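
Note the `--images`/`--registries` pair on `addons enable`: together they rewrite the addon's image reference, here deliberately pointing metrics-server at an unreachable registry, and the follow-up `kubectl describe` is how the test confirms the override landed in the Deployment. Sketched, with a grep added purely for convenience:

    minikube addons enable metrics-server -p <profile> \
      --images=MetricsServer=k8s.gcr.io/echoserver:1.4 \
      --registries=MetricsServer=fake.domain   # image should now resolve under fake.domain
    kubectl --context <profile> describe deploy/metrics-server -n kube-system | grep -i image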

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p default-k8s-diff-port-111353 --alsologtostderr -v=3

=== CONT  TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p default-k8s-diff-port-111353 --alsologtostderr -v=3: (12.449135587s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.45s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-111353 -n default-k8s-diff-port-111353
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-111353 -n default-k8s-diff-port-111353: exit status 7 (112.733454ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p default-k8s-diff-port-111353 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.38s)
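
Here the exit-code convention differs from Pause: with the whole node stopped, `minikube status` exits 7 rather than 2, and the test accepts that before enabling the dashboard addon against the stopped profile. Enabling an addon while stopped essentially records it in the profile's config, to be applied on the next start, which is exactly what SecondStart and AddonExistsAfterStop then verify. As a sketch:

    minikube status --format={{.Host}} -p <profile>; rc=$?
    if [ "$rc" -eq 7 ]; then   # 7 => profile exists but the host is stopped ("may be ok" above)
      minikube addons enable dashboard -p <profile> \
        --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
    fi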

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-111353 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.25.3
E1109 11:15:02.076600   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/skaffold-103914/client.crt: no such file or directory

=== CONT  TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-111353 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --kubernetes-version=v1.25.3: (5m2.186963865s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-111353 -n default-k8s-diff-port-111353
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (302.69s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-vmclm" [ae7bfa52-3458-4b73-bc44-c84377779b5c] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-vmclm" [ae7bfa52-3458-4b73-bc44-c84377779b5c] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 9.014758994s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (9.02s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:342: "kubernetes-dashboard-57bbdc5f89-vmclm" [ae7bfa52-3458-4b73-bc44-c84377779b5c] Running

=== CONT  TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008528818s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-111353 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p default-k8s-diff-port-111353 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.48s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p default-k8s-diff-port-111353 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-111353 -n default-k8s-diff-port-111353
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-111353 -n default-k8s-diff-port-111353: exit status 2 (406.787504ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-111353 -n default-k8s-diff-port-111353
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-111353 -n default-k8s-diff-port-111353: exit status 2 (412.132904ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p default-k8s-diff-port-111353 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-111353 -n default-k8s-diff-port-111353
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-111353 -n default-k8s-diff-port-111353
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.19s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-112024 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.25.3
E1109 11:20:28.028982   22868 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/15331-22028/.minikube/profiles/false-104027/client.crt: no such file or directory

=== CONT  TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-112024 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.25.3: (40.689275968s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (40.69s)
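
The newest-cni group starts Kubernetes with CNI networking left for the user to supply, so `--wait` is narrowed to components that can become healthy without a pod network; the WARNING lines later in this group are the tests acknowledging that pods cannot schedule in this mode. The flag set, reflowed:

    minikube start -p <profile> --driver=docker --kubernetes-version=v1.25.3 --memory=2200 \
      --network-plugin=cni \
      --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 \
      --feature-gates ServerSideApply=true \
      --wait=apiserver,system_pods,default_sa   # omit readiness waits that need working pod networking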

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-112024 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.85s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p newest-cni-112024 --alsologtostderr -v=3

=== CONT  TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p newest-cni-112024 --alsologtostderr -v=3: (12.419993463s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (12.42s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-112024 -n newest-cni-112024
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-112024 -n newest-cni-112024: exit status 7 (113.715725ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p newest-cni-112024 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.40s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-112024 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.25.3

=== CONT  TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-112024 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --kubernetes-version=v1.25.3: (16.627166816s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-112024 -n newest-cni-112024
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (17.05s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 ssh -p newest-cni-112024 "sudo crictl images -o json"
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.44s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p newest-cni-112024 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-112024 -n newest-cni-112024
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-112024 -n newest-cni-112024: exit status 2 (406.908426ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-112024 -n newest-cni-112024
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-112024 -n newest-cni-112024: exit status 2 (406.855202ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p newest-cni-112024 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-112024 -n newest-cni-112024
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-112024 -n newest-cni-112024
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.16s)

Test skip (18/295)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

=== RUN   TestDownloadOnly/v1.25.3/cached-images
aaa_download_only_test.go:121: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.25.3/cached-images (0.00s)

=== RUN   TestDownloadOnly/v1.25.3/binaries
aaa_download_only_test.go:140: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.25.3/binaries (0.00s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:283: registry stabilized in 10.477586ms
addons_test.go:285: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...

=== CONT  TestAddons/parallel/Registry
helpers_test.go:342: "registry-qlg9r" [ff651436-d679-438a-a3bf-6bacf17e09cb] Running

=== CONT  TestAddons/parallel/Registry
addons_test.go:285: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.011076739s
addons_test.go:288: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:342: "registry-proxy-cbqpc" [cf4a45d9-b210-4f74-a0ec-d9f72815abef] Running

=== CONT  TestAddons/parallel/Registry
addons_test.go:288: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.010757273s
addons_test.go:293: (dbg) Run:  kubectl --context addons-100328 delete po -l run=registry-test --now
addons_test.go:298: (dbg) Run:  kubectl --context addons-100328 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

=== CONT  TestAddons/parallel/Registry
addons_test.go:298: (dbg) Done: kubectl --context addons-100328 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.773081541s)
addons_test.go:308: Unable to complete rest of the test due to connectivity assumptions
--- SKIP: TestAddons/parallel/Registry (15.91s)
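
The portion of the registry test that does run is an in-cluster probe: a throwaway busybox pod resolves the registry Service by its cluster DNS name and checks that it answers. The skip happens afterwards because the remaining steps assume the host can reach cluster addresses directly, which this driver/OS combination does not guarantee. The probe, essentially verbatim from the log, with `<profile>` as the placeholder used above:

    kubectl --context <profile> run --rm registry-test --restart=Never -it \
      --image=gcr.io/k8s-minikube/busybox -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
    # --spider checks reachability without downloading; -S prints the response headers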

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:165: (dbg) Run:  kubectl --context addons-100328 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:185: (dbg) Run:  kubectl --context addons-100328 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:198: (dbg) Run:  kubectl --context addons-100328 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:203: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:342: "nginx" [5b3a882a-7fbe-4f50-ae4f-4433b681068c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

=== CONT  TestAddons/parallel/Ingress
helpers_test.go:342: "nginx" [5b3a882a-7fbe-4f50-ae4f-4433b681068c] Running

=== CONT  TestAddons/parallel/Ingress
addons_test.go:203: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.009556676s
addons_test.go:215: (dbg) Run:  out/minikube-darwin-amd64 -p addons-100328 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:235: skipping ingress DNS test for any combination that needs port forwarding
--- SKIP: TestAddons/parallel/Ingress (11.29s)
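
Ingress routing itself is verified from inside the node before the skip: curl hits the ingress controller on 127.0.0.1 with a spoofed Host header, which is what the nginx ingress rule deployed above keys on. Only the DNS-based variant is skipped, since it would need port forwarding. The check:

    minikube -p <profile> ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"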

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:451: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1559: (dbg) Run:  kubectl --context functional-100827 create deployment hello-node-connect --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1565: (dbg) Run:  kubectl --context functional-100827 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1570: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:342: "hello-node-connect-6458c8fb6f-685ss" [89d2f50a-cdd6-4b71-a743-92c004cce03b] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:342: "hello-node-connect-6458c8fb6f-685ss" [89d2f50a-cdd6-4b71-a743-92c004cce03b] Running

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1570: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.007015439s
functional_test.go:1576: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (8.12s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:543: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:97: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

=== RUN   TestNetworkPlugins/group/flannel
net_test.go:79: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:175: Cleaning up "flannel-104027" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p flannel-104027
--- SKIP: TestNetworkPlugins/group/flannel (0.61s)

=== RUN   TestNetworkPlugins/group/custom-flannel
net_test.go:79: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:175: Cleaning up "custom-flannel-104027" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p custom-flannel-104027
--- SKIP: TestNetworkPlugins/group/custom-flannel (0.54s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-111353" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p disable-driver-mounts-111353
--- SKIP: TestStartStop/group/disable-driver-mounts (0.46s)
